Krishna Deo Singh Chauhan is an Associate Professor at Jindal Global Law School, Sonipat, and Anupriya Singh is an Independent Researcher.

Occam’s Razor and Liability for Self-Driven Cars
Introduction
Self-driven cars (or Autonomous Vehicles, hereinafter “AVs”) are being extensively tested by nearly every major car manufacturer in the world.[1] They promise to usher in a new era of mobility, conferring various advantages. Firstly, they are predicted to reduce the number of road accidents, saving thousands of lives every year. This is because the systems driving these vehicles are not prone to human follies (such as limited vision and susceptibility to distraction), and because, as the number of AVs on the street increases, they can ‘speak’ to each other, making the network even safer. That, by itself, creates a substantial imperative to incentivize faster development and adoption of the technology globally.[2] In addition, AVs are expected to save up to 7 billion litres of fuel yearly, a substantial reduction in the consumption of fossil fuels, along with another hundred billion dollars in accident-related externalities and lost productivity.[3]
There has been significant improvement in the technology over the past couple of decades. In 2004, when the US Defense Advanced Research Projects Agency (DARPA) organized its Grand Challenge for driverless cars, the level of failure was spectacular: the best-performing participant completed less than five percent of the 150-mile course. But only ten years later, Google and other entrants in the AV space had successfully logged hundreds of thousands of miles on public streets.[4] As the technology continues to improve, paving the way for gradual but steady adoption of AVs on the streets, an essential set of obstacles remains: regulatory barriers. In particular, the safety of such vehicles and restitution for physical harm caused by them form the most critical issues.
Many jurisdictions, including several US states and other countries, have now made express provisions allowing AVs to be driven on their streets, along with a set of ex-ante measures that include testing and certification. But what about the ascertainment of liability, ex post?
Liability for damage
The question of liability leads to the most unanticipated and challenging doctrinal quandaries. Machine Learning-based Artificial Intelligence software not only reduces the quotient of human agency in the actions of an AV on the street but may also act as a novus actus interveniens[5] for any consequences that follow.[6]
However, the doctrinal toolkit is not entirely empty, and several possible courses of action have been suggested. Firstly, one might hold the human driver responsible. Even today, most AVs require humans to remain behind the wheel and actively intervene whenever needed; if they fail to take the necessary action, it might amount to negligence in observing their duty of care. But this option works well only for cars with lower levels of automation. The Society of Automotive Engineers classifies vehicles into six levels of increasing automation. In the first three levels (Levels 0-2), the driver is required to remain alert and act; at Level 3, the driver must act only when alerted by the car to do so; and Levels 4 and 5 require nearly no intervention.
It is clear, then, that as we move up the automation ladder, the human driver in the car loses any real agency, and holding them accountable for negligence would not only be counterintuitive but would also disincentivize the widespread purchase and adoption of such cars.
Who, then, should be liable at these levels? The capability to direct the car’s actions increasingly rests with the manufacturer, even if only indirectly, through design and programming. This leads to the second widely discussed possibility: holding car manufacturers liable under the law of product liability.[7] Product liability, crafted in response to the mass industrialization and mass production of goods in the twentieth century, is a form of strict liability that makes manufacturers answerable for any defect in the design, manufacture, or information provided with a product.[8]
However, product liability too presents challenges in the case of AVs. Firstly, establishing manufacturing and design defects is full of conceptual pitfalls. For instance, when accidents occur, should defects be determined against the standard of human driving (an error that no human driver would commit) or of algorithmic driving (an error that no AV accepted as the industry standard would commit)? Should it be considered a manufacturing defect when the algorithm evolves substantially beyond what was initially manufactured? And what standards of information must be provided to human passengers when they are nearly out of the decision-making loop?[9]
Questions such as these make product liability a less-than-ideal choice for determining liability in AV accidents, keeping the regulatory landscape muddled and hazy. Successful product liability claims are also known to result in significant damages awards, a further powerful disincentive for investment in this market.[10]
Occam’s Razor for liability
A useful way of thinking about the liability issue, which protects individuals from harm without disincentivizing the industry, can be found in the old philosophical wisdom of Occam’s Razor. It provides that “if you have two competing ideas to explain the same phenomenon, you should prefer the simpler one”. Adapted to the legal context, it can be reformulated as “if you have competing regulatory models, you should prefer the simpler one.”
The primary reason such simplicity may be a valuable strategy is its potential to avoid regulatory shock. No industry takes kindly to taxing regulations or major regulatory overhauls. Particularly when a simple strategy also protects the interests of consumers equally well, it works best to incentivize the adoption of beneficial new technology.
Nathan Greenblatt suggests a strategy that fits this description in the context of liability for damage caused by AVs. He notes: “Damages imposed on the carmaker (which is responsible for the computer driver’s actions) [should] be equal to the damages that would be imposed on a human driver”.[11] This measure protects victims’ interests much as they are protected in the present world of predominantly manual driving. It also makes it easier for insurers to design premium frameworks. And while car manufacturers, rather than car owners, would pay the insurance premiums, the resultant clarity still makes it a better deal for the industry.[12]
Conclusion – Relevance for the Indian context
While India appears far from realizing the AV dream, the tide is slowly turning: start-ups are entering the space, and the government has launched several initiatives in this direction. In a growing country with worsening traffic problems, such as India, AVs can go a long way in making travel safer, cheaper, and faster. In such circumstances, regulation should not become a barrier to the widest adoption of the technology. Greenblatt’s strategy might thus be a helpful tool for thinking about the regulation of AVs in India and even globally.
[1] Vikram Bhargava and Tae Wan Kim, ‘Autonomous Vehicles and Moral Uncertainty’ in Robot Ethics 2.0 (Oxford University Press 2017).
[2] Kyle Colonna, ‘Autonomous Cars and Tort Liability’ (2012) 4 Case W Res JL Tech & Internet 81, 114.
[3] Nathan A Greenblatt, ‘Self-Driving Cars and the Law’ (2016) 53 IEEE Spectrum 46, 50.
[4] Erik Brynjolfsson and Andrew McAfee, The Second Machine Age (W W Norton & Company 2016).
[5] ‘Novus actus interveniens’ means ‘new act intervening’. In an action for negligence, it is an intervening act that breaks the chain of causation linking the plaintiff’s harm to the defendant’s action, leading to a failure on the plaintiff’s part to establish causation in law.
[6] Jacob Turner, Robot Rules: Regulating Artificial Intelligence (1st edn, Palgrave Macmillan 2019) 61.
[7] Colonna (n 2) 117.
[8] Turner (n 6) 93.
[9] David C Vladeck, ‘Machines Without Principals: Liability Rules and Artificial Intelligence’ (2014) 89 Washington Law Review 117.
[10] Colonna (n 2) 102.
[11] Greenblatt (n 3).
[12] ibid.