Autonomous and connected vehicles: can they be used in India?



Author: Srivijay R Sastry
3rd year,
School of Law, Christ (Deemed to be University).

Abstract

Autonomous vehicles open new paths for mobility and are acknowledged to have economic and societal benefits, but there are concerns regarding the extent of those benefits and their unintended consequences. As with all new technologies, appropriate governance strategies can help maximize the potential benefits associated with the rapid development of AVs and minimize the risks often associated with technological disruption and negative and/or unintended consequences. The concern, however, remains whether governments have the capacity to manage the wider societal implications in a timely manner.


Implementing self-driving vehicles has various other implications for the economy and for the insurance sector. Adoption of autonomous technology should reduce the rate of accidents and, in theory, put downward pressure on premiums. But premiums will not fall to zero: the technology will not eliminate accidents altogether, and there will still be a need for insurance. The automotive industry as a whole, and the auto insurance business in particular, is on the verge of a makeover, and business models are adjusting to autonomous vehicle technology. Positioning properly to avoid impending pricing pressure and to capitalize on nascent demand and new revenue streams is critical.

Further, autonomous vehicles have a huge impact on the economy of the nation. This impact can be described as a double-edged sword. On one hand, the economy benefits from increased production of autonomous vehicles, reduced accidents, a significant reduction in pollution, a boost to economic growth, increased business for software companies, etc.; on the other hand, millions of people employed by Uber, Ola and other private taxi organizations in India, and many black taxi cab owners in the United Kingdom, would lose their jobs, leading to a significant rise in unemployment.
Through this paper, I will be analyzing the following:
      ·       Imposition of Liability
      ·       Impact on the Insurance Sector
      ·       Impact on the Economy
After analyzing the above-mentioned areas, I will have clarity as to whether the technology of autonomous vehicles can be implemented in India.

Introduction

Using Radio Detection and Ranging (herein known as RADAR), Laser Illumination Detection and Ranging (herein known as LIDAR) and Sound Navigation and Ranging (herein known as SONAR), cars emit radiation to detect and sense the environment and their surroundings. The laser light reflects off the surroundings and returns to the self-driving car, allowing the car to figure out where everything is.
This sweeping action is so rapid that it can tell the car the difference between a car, a truck, a bicycle, a pedestrian or a building. The RADAR, LIDAR and SONAR sensors are fitted in multiple parts of the car so that the self-driving car has a three-hundred-and-sixty-degree view of its surroundings.[1] Among the things a self-driving car has to pay attention to are road signs such as stop signs and speed-limit signs; signal lights and whether they are green, yellow or red; warning lights at construction sites; the flashing colored lights on emergency vehicles; and the police officer who wants the car to pull over to the side of the road. Further, the car has to pay attention to how fast other cars are moving, whether they are slowing down or speeding up, and whether they are moving toward it, away from it, or in some other direction. It also has to concentrate on the road signs, the route maps, the directions and deviations, etc. to ensure that it reaches its destination without any hiccups.
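To illustrate the ranging principle these sensors rely on, here is a minimal Python sketch, with purely hypothetical sensor positions and timing values, of how a LIDAR-style time-of-flight measurement is converted into a distance and how readings from sensors mounted around the car could be combined into a rough 360-degree picture. It is an explanatory sketch, not any manufacturer's actual code.

    # Minimal illustration of LIDAR-style time-of-flight ranging.
    # All sensor positions and timing values below are hypothetical.
    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def distance_from_round_trip(seconds: float) -> float:
        """The pulse travels to the obstacle and back, so the one-way
        distance is half of (speed of light x elapsed time)."""
        return SPEED_OF_LIGHT * seconds / 2.0

    # Hypothetical round-trip times reported by sensors around the car.
    round_trip_times = {"front": 2.0e-7, "rear": 6.7e-8, "left": 1.3e-7, "right": 4.0e-7}

    surroundings = {direction: round(distance_from_round_trip(t), 1)
                    for direction, t in round_trip_times.items()}
    print(surroundings)  # e.g. {'front': 30.0, 'rear': 10.0, 'left': 19.5, 'right': 60.0}

A real vehicle repeats such sweeps many times per second and fuses the RADAR, LIDAR and SONAR returns, which is what gives it the 360-degree view described above.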
Having understood how these technologies work, it is now time to see if this technology can be implemented in the country by analyzing the three main barriers. They are:
      ·       Imposition of liability
      ·       Impact on the Insurance Sector
      ·       Impact on the Economy
Imposition of Liability:
As technology has developed, criminal offenses are no longer committed only by humans. A major development in this area occurred in the seventeenth century. In the twenty-first century, criminal law is required to supply adequate solutions for the commission of criminal offenses through artificially intelligent entities. Basically, there are three fundamental models to cope with this phenomenon within the current definitions of criminal law[2].

1.     The Perpetration-by-Another Liability Model:
The first model does not attribute any human features to the AI system; the AI system is considered an innocent agent. From this legal point of view, a machine is a machine and never human. However, one cannot ignore the capabilities of an AI system. Under this model, those capabilities are not enough to consider the AI system the perpetrator of an offense. They might resemble the parallel capabilities of a mentally limited person, such as a child, a mentally incompetent person, or a person who lacks the criminal state of mind to engage in the conduct[3]. Legally, when an offense is committed by an innocent agent, as where a person causes a child, a mentally incompetent person[4], or a person who lacks a criminal state of mind to commit the conduct, that person is criminally liable as a perpetrator-by-another[5]. In such cases the intermediary is regarded as a mere instrument, even though it is a sophisticated instrument, and the originating actor (the perpetrator-by-another) is the real perpetrator as a principal in the first degree. The perpetrator-by-another is liable for the conduct of the innocent agent, and that liability is determined on the basis of the innocent agent's conduct and the perpetrator-by-another's own mental state. When the programmers or the users use the AI system instrumentally, the commission of the conduct by the AI system is attributed to them; the mental element required by the specific offense already exists in their minds. The programmer, for example, has the intention to commit arson, and the user intends to commit assault, even though the factual commission of these offenses is carried out through a robot, which is an AI system. The instrumental use of an innocent agent is therefore treated as perpetration by the user himself.

2.     The Natural Probable Consequence Liability Model:
The second model of criminal liability assumes a deep involvement of the programmers or the users in the AI system's daily activities, but without any plan to commit an offense through the AI system. However, during the execution of its daily missions, the AI system commits an offense. The programmers or the users did not know about the commission of the offense until it had already been committed, did not plan any offense, and did not participate in any part of the commission of that specific offense.

According to the second model, a person might be held liable for an offense if that offense is a natural and probable consequence of that person's conduct. Originally, natural-probable-consequence liability was used to impose criminal liability upon accomplices where one of them committed an offense that was not planned by all of them and was not part of the conspiracy. The established rule stated by courts and commentators is that accomplice liability extends to acts of the perpetrator which were a "natural and probable consequence"[6] of a criminal scheme the accomplice encouraged or aided[7]. Natural-probable-consequence liability has been widely accepted in accomplice liability statutes and recodifications[8].

As a result, according to the natural-probable-consequence liability model, when the programmers or users programmed or used the AI system knowingly and willfully in order to commit one offense through it, but the AI system deviated from the plan and committed another offense in addition to, or instead of, the planned offense, the programmers or users shall be liable for that other offense as if it had been committed knowingly and willfully. For example, if a programmer designs an AI system to commit a robbery and, in the course of the robbery, the system kills a person, the programmer shall be criminally liable for the robbery (if committed) and for the killing as an offense of manslaughter or murder, which require either knowledge or intent[9].
                                                                     
3.     The Direct Liability Model

The third model does not assume any dependence of the AI system on a specific programmer or user. It focuses on the AI system itself and makes it possible to derive the criminal liability of the outer circles more accurately[10]. Criminal liability for a specific offense is mainly composed of the external element and the internal element of that offense. Any person to whom both elements of the specific offense are attributed is held criminally liable for that offense; no other qualifications are required in order to impose criminal liability. A person might possess further capabilities, but in order to impose criminal liability, the existence of the external element and the internal element required by the specific offense is quite enough. In order to impose criminal liability upon any kind of entity, these requirements must be proven to exist in that specific entity. When it is proven that a person committed the relevant conduct accompanied by the relevant knowledge or intention, the person is criminally liable for the specific offense. The relevant question regarding the criminal liability of AI systems is how these entities might fulfill these requirements of criminal liability.
An AI algorithm might have many features and qualifications, some of them far exceeding those of an average human, but no such features or qualifications are required in order to impose criminal liability. When a human or a corporation has formulated both the external element and the internal element, criminal liability is imposed. If an AI system has the capability to formulate both the external element and the internal element, and in fact it really formulates them, there is nothing to prevent criminal liability from being imposed upon that AI system. Most cognitive capabilities developed in modern AI technology are immaterial to the question of imposing criminal liability. Creativity is a feature humans share with some animals, but creativity is not required in order to impose criminal liability; even the most uncreative persons may be criminally liable. The only mental requirements for imposing criminal liability are knowledge, intent, negligence, etc., as required by the specific offense and under the general theory of criminal law.
  
Analyzing the three liability models, it is evident, at least to the best of my knowledge, that the perpetration-by-another model is the model of liability applicable to autonomous and connected vehicles. As far as autonomous vehicles go, Level 4 autonomy is currently in place. A Level 4 vehicle can drive itself completely, but it requires human supervision under the laws currently in force. In this model, the driver commands the vehicle to operate on autopilot; the car then drives itself and requires human intervention at specified intervals. If this intervention is not provided, the car automatically comes to a complete stop right where it is, which could be in the middle of the road. In such cases, where an abrupt decision is made by the system in the vehicle, the driver/user of that vehicle should be held liable, because he failed to intervene and, by virtue of that omission, an accident occurred.
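The supervision logic described above can be sketched in a few lines of Python. Everything here is a hypothetical interface; the function names, the vehicle object and the 30-second interval are my own illustrative assumptions rather than any vendor's real API. The point is only to show how a Level 4 system might demand periodic human confirmation and stop if it does not arrive.

    import time

    CONFIRMATION_INTERVAL_S = 30  # hypothetical: how often the driver must confirm attention

    def autopilot_loop(vehicle, driver_confirmed):
        """vehicle and driver_confirmed are assumed, illustrative interfaces."""
        last_confirmation = time.monotonic()
        while vehicle.autopilot_engaged:
            vehicle.drive_one_step()  # sense, plan and actuate for one control cycle
            if driver_confirmed():
                last_confirmation = time.monotonic()
            elif time.monotonic() - last_confirmation > CONFIRMATION_INTERVAL_S:
                # No human intervention within the required interval, so the
                # system halts the car where it is, as described above.
                vehicle.come_to_controlled_stop()
                break

On this reading, the liability question turns on the branch in which the driver fails to confirm: the stop, and any accident that follows from it, results from the human's omission rather than from a defect in the programmed behaviour.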

In the future, when completely autonomous vehicles arrive, their programs will be based on driving-decision models. These models are based on the driving decisions of people whom the developers consider nearly perfect drivers. What is to be noted here is that even these nearly perfect drivers make mistakes, and those mistakes are ones that the programmers cannot predict either. Further, the autonomous vehicle cannot learn on its own, as there is no second chance, nor is there a chance to learn from a mistake. A wrong decision by an autonomous vehicle can lead to a very major accident that puts the lives of the public in danger.

Deliberating on this topic, it can be concluded that the user, in the case of Level 4 autonomous driving, and the programmer, in the case of Level 5 autonomous driving, are to be held liable. This is because autonomous vehicles have no scope to learn to control themselves: everything from acceleration to turning to braking to stability control is programmed, and the vehicles simply follow commands and execute those functions. If there is anything new that these vehicles have to learn, it can be done only through a software update that comes from the manufacturer.

This is the reason why I am of the opinion that manufacturers/programmers, in the case of Level 5 autonomous vehicles, and users, in the case of Level 4 autonomous vehicles, should be held liable.
Impact on Economy:
The AV study is based on reports and research conducted by Montgomery and two other economists: Erica Groshen, former U.S. Bureau of Labor Statistics commissioner, and Richard Mudge, CEO of Compass Transportation and Technology. Traffic congestion could be reduced because of fewer accidents, with AVs allowing for "much smoother traffic flow," said Montgomery. "Intersections and merges would no longer produce the kind of accordion effects, and AVs would also allow the packing of more vehicles into a smaller amount of space." However, other experts raised the possibility that AVs could also have the unintended effect of putting more cars on the road and increasing congestion, said Susan Helper, former chief economist, U.S. Department of Commerce. Dozens of auto and mobility companies are involved in some form of AV development, ranging from the testing of small electric shuttles in cities like Las Vegas, to Ford's latest move to purchase the abandoned Michigan Central Station in Detroit, to be restored and redeveloped as the company's campus for self-driving car research and development, as well as for other mobility endeavors like ride-hailing services.

Analyzing it from an Indian perspective, it is a boon to software and technology-based firms, as they can contribute immensely to the AV sector. Software plays a quintessential role in these vehicles, and given that there are a lot of software firms in our country, implementing this technology should not be a problem. Along with boosting the economy, these vehicles help reduce the accidents caused by human judgment error, since the driving-decision models on which they are built account for the mistakes that drivers make. The Indian taxi market has gained prominence over the last 2-3 years against the backdrop of the entry of app-based aggregators, which have not only disrupted the auto-rickshaw and traditional taxi market but also forced automotive OEMs to revise their growth and marketing strategies. A report released by ICRA[11] stated that the Indian passenger vehicle industry is likely to ride on the strong growth potential of the domestic taxi segment in the near term, whereas medium- to long-term growth will be supported by the low car penetration level and increasing income levels. "Considering strong demand and the increasing penetration of such players in smaller towns, the Indian taxi market is poised for robust double-digit growth over the next 2-3 years. The Indian taxi market is still at a nascent stage, with huge scope for growth given the low car penetration level and poor public infrastructure. OEMs have also realized the market potential and have dedicated sales teams to cater to fleet operators.

The entry of aggregators like Ola and Uber has changed the competitive dynamics of the taxi market, which till now has largely been fragmented and has lacked bargaining power with OEMs, with even the larger organized fleet operators typically having fewer than 500 cars. Aggregators like Ola and Uber, however, have over 25,000 cabs in the NCR market alone. Consequently, this segment within the PV industry now enjoys significant clout, with some OEMs setting up dedicated teams to address the aggregator market," observed the report.

Given how significant the growth of the taxi market is, it is important to consider the downsides of implementing the technology of autonomous and connected vehicles.
The threat of Cyber Terrorism: Every car has a brain, known as the Electronic Control Unit (ECU), through which the car operates. In connected vehicles, this ECU is always connected to a server, and this server, if hacked, exposes vital information about the vehicle and its passengers and can lead to a chain reaction of problems, including increased accidents, violation of privacy, etc.
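To make this exposure concrete, the sketch below shows one standard mitigation, assuming a secret key shared between the server and the ECU: every command carries an HMAC tag, and the vehicle ignores any message whose tag does not verify. This is an illustrative Python sketch of the general idea of authenticated messaging, not the protocol of any particular manufacturer.

    import hmac, hashlib

    # Illustrative only: a secret assumed to be provisioned into the ECU at manufacture.
    ECU_SECRET_KEY = b"example-key-not-a-real-one"

    def sign_command(command: bytes) -> bytes:
        """Server side: attach an HMAC-SHA256 tag to each command."""
        return hmac.new(ECU_SECRET_KEY, command, hashlib.sha256).digest()

    def ecu_accepts(command: bytes, tag: bytes) -> bool:
        """ECU side: reject any command whose tag does not verify, so a
        spoofed or tampered message over a compromised link is ignored."""
        expected = hmac.new(ECU_SECRET_KEY, command, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    command = b"set_speed_limit=80"
    tag = sign_command(command)
    assert ecu_accepts(command, tag)               # genuine message accepted
    assert not ecu_accepts(b"unlock_doors", tag)   # altered message rejected

Authentication of this kind limits what an attacker can do with a hijacked connection, but it does not remove the underlying concern: the server itself, and the data it holds about vehicles and passengers, remains a single high-value target.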
The Problem of Unemployment: As the world implements this technology, there are two main economies that will be hit by the problem of unemployment: those of London and India. A large number of people in these places are employed by their respective taxi-driving services; in the case of London they are the very famous Black Taxi operators, and in India they are the drivers of Ola and Uber. Uber, as we speak, is already entering into contracts with manufacturers of autonomous vehicles to try to implement this technology in as many markets as possible. This will lead to massive unemployment, and a very practical solution to this situation should be found before implementing this technology in India.
Impact on the Insurance Sector:
There has been considerable discussion about whether autonomous vehicles will lead to the end of the auto insurance industry. Although this prediction may be extreme, it is entirely possible that auto insurance will change into something very different from what it is today.[12]

Autonomous features are supposed to make cars safer. You would think this would result in lower car insurance premiums. At least so far, however, this has not played out. The technology is still new and untested. There’s also concern that some people are relying too heavily on autonomous features that have not reached Level 5 or even Level 4 yet. And when crashes do occur, the high-tech features make repairs extremely expensive. 

The graph below shows the increase in premiums on autonomous vehicles over time owing to expensive repair costs.[13]

It’s not only the cost that will spur change. While crashes will still occur with autonomous vehicles, the cause of these crashes will change. Driver error will become less and less of an issue, while product liability will become more of an issue. This will shift the liability away from drivers and toward manufacturers and service providers. As a result, more auto manufacturers may follow Tesla’s lead and start offering their own insurance. 

The point to be noted here is that the technology used in autonomous vehicles is very expensive to repair should a vehicle be involved in an accident. Because these repairs are so expensive, the premiums on these insurance policies will see a significant rise. This is also the reason why autonomous vehicle manufacturer Tesla offers its own insurance policy: the premiums from everyday insurance providers were shockingly high.
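The underlying arithmetic can be shown with a small worked example. The figures are purely hypothetical and chosen only to illustrate the argument: an insurer's expected claim cost per vehicle is roughly accident frequency multiplied by average repair cost, so even if autonomous features halve the accident rate, a sufficiently large rise in repair cost still pushes premiums up.

    # Hypothetical figures, for illustration of the premium argument only.
    def expected_claim_cost(accident_rate: float, avg_repair_cost: float) -> float:
        """Pure premium per vehicle per year = frequency x severity."""
        return accident_rate * avg_repair_cost

    conventional = expected_claim_cost(accident_rate=0.10, avg_repair_cost=50_000)   # 5,000
    autonomous   = expected_claim_cost(accident_rate=0.05, avg_repair_cost=200_000)  # 10,000
    print(conventional, autonomous)  # fewer accidents, yet a higher expected cost per vehicle

On these assumed numbers the accident rate halves, but the expected cost per vehicle doubles, which is exactly the pressure on premiums described above.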

This is a serious issue to consider in the Indian scenario. Other than Ola and Uber, there are numerous other agencies in India that currently provide taxi services. These small-scale operators will not have the means to afford the high premiums that policies on self-driving cars demand. This will not only lead to a rise in unemployment but will also lead to the shutting down of a lot of businesses.

In my opinion, the Government of India should come up with a model where these premiums are either subsidized or reduced for the consumer while ensuring that the insurance company also benefits. Otherwise, with the arrival of this technology, we can expect to see a decline in the number of traditional insurance companies as well.

Conclusion

The existing laws in India place restrictions on the use of certain frequencies by civilians. This is the reason why most autonomous vehicles, and self-driving technology generally, have not made it to the country. However, the Union Minister for Transport, Nitin Gadkari, has promised to make the necessary arrangements for the availability of the frequencies required for these technologies to work flawlessly in India. While the imposition of liability is quite clear from the above analysis, the important questions to worry about are the impact on the economy and the implications for the insurance sector.
As far as suggestions are concerned, I would like to propose that the Government come up with the following:
      1.     A scheme to regulate autonomous and connected vehicles, keeping in mind the liability model suggested above.
      2.     A solution to address the issue of unemployment, which is a side effect of implementing this technology.
      3.     A central operating station, with high security, for the control and working of these vehicles, to prevent the threat of cyber terrorism.
      4.     Subsidies and incentives for the insurance sector and for the manufacturers/developers of this technology.
      5.     Improvements to the current driving conditions.
      6.     Measures to instill disciplined driving with a sense of maturity.
      7.     Adequate road infrastructure to support this technology.
If the above suggestions are considered, it is definitely possible to implement the technology of autonomous and connected vehicles. Implementing this technology will take us one step forward and place India alongside the most advanced countries of the world; it will also go a long way toward fulfilling the dream of a developed India.


[1] Bill Robertson, Science 101: How Do Self-Driving Cars Work?, Vol. 54


[2] Gabriel Hallevy, The Basic Models of Criminal Liability of AI Systems and Outer Circles

[3] INTELLIGENCE UNDER CRIMINAL LAW, Northeastern University Press, University Press of New England (2013); and GABRIEL HALLEVY, LIABILITY FOR CRIMES INVOLVING ARTIFICIAL INTELLIGENCE SYSTEMS, Springer-Verlag International Academic Press (2015)

[4] Johnson v. State, 142 Ala. 70, 38 So. 182 (1904)

[5] Morrisey v. State, 620 A.2d 207 (Del.1993)

[6] United States v. Powell, 929 F.2d 724 (D.C.Cir.1991).

[7] Francis Bowes Sayre, Criminal Responsibility for the Acts of Another, 43 HARV. L. REV. 689 (1930)

[8] GABRIEL HALLEVY, THE MATRIX OF DERIVATIVE CRIMINAL LIABILITY 241-247.

[9] United States v. Greer, 467 F.2d 1064 (7th  Cir.1972)

[10] Steven J. Frank, Tort Adjudication and the Emergence of Artificial Intelligence Software

[11] Taxi market to account for 15-17 per cent of Indian PV market by 2020, The Economic Times, //economictimes.indiatimes.com/articleshow/57432718.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst


[12] Insurance for Autonomous Vehicles & Self Driving Cars – July 23rd 2019, https://foundershield.com/insurance-for-autonomous-vehicles-self-driving-cars/

[13] https://www.aaa.com
