Ahsnat Mokarim & Aparna Bhatnagar are students at Maharashtra National Law University, Mumbai.

I. The Gap Between Law And AI

In 2018, a milestone in India’s history of technological advancement was achieved when leading cardiologist and Padma Shri awardee Tejas Patel performed the world’s first-in-human telerobotic coronary intervention on a patient nearly 32 kilometres away. Suppose misfortune had struck and the surgery had failed due to an unanticipated malfunction of the robot, leading to the death of the patient. Would the robot be criminally liable for the death? Would the doctor be held liable for medical negligence? Or would the creator of the robot be held liable even though the malfunction was not anticipated (i.e., there was no element of mens rea)? On the other hand, if the robot is held criminally liable, what defences would it be entitled to avail? For example, can a faulty robot, like a human, claim the defence of insanity or intoxication where it has been infected with viruses? As a matter of fact, there have been cases where people charged with computer-related offences have successfully argued that their machines had been infected with malware that was instead responsible for the crime.

This article examines the demand for legal personhood for artificial intelligence (hereinafter referred to as ‘AI’) systems, highlighting existing legal gaps and ensuing complications. It dissects arguments from both sides within the scope of liability and abstraction, advocating for distinguishing between legal ‘personhood’ and ‘agenthood’. Ultimately, it concludes that granting agenthood to AI emerges as the most viable approach to regulate AI and tackle related challenges.

II. Does The Recognition Of AI As A ‘Legal Person’ Lead To Desirable Results?

A proposed solution to the aforementioned dilemmas requires that these automated machines be recognised in the eyes of the law. This leads us to the debate surrounding the granting of legal personhood to AI systems, and whether this demand is, in fact, desirable. Support for this argument is rooted in the growing need to attribute liability in such cases. Although it may seem inconceivable to confer legal status on artificial agents, as they do not fall within the traditional definition of ‘human’, the demand finds support in the decisions of various countries to recognise corporations, animals, environmental features and even idols as legal persons. Commenting on the evolution of law and technology in the coming years, Lawrence B. Solum observes that “our concept of a person may change in a way that creates a cleavage between human and person. Our current linguistic practice will not be binding in the imagined future”.

While the demand for legal personhood of AI is crucial to ensure accountability for the unforeseen, it is important to address the feasibility of assigning culpability to robots that could be misused by humans seeking to evade punishment (i.e., using robots as liability shields). Presently, AIs are considered the property of their owners, thereby making the owners responsible for any possible liabilities. However, granting legal identity to AI systems would give creators the leniency to be bolder (or worse, negligent) with their creations.

Another consideration in granting legal personhood to AIs is the implementation of punishments for wrongful acts. While the criminal justice system rests on the principles of deterrence, retribution and rehabilitation, the same cannot be made applicable to machines, at least until they develop human-like consciousness. Even if we reach that level of advancement, scholars argue that there will always be a “missing-something” element in AIs that could never resemble “human-like consciousness”. Horst Eidenmüller proposes that granting legal personality to artificial intelligence systems could open up the potential for novel sanctions, such as “revoking the system’s legal status, temporarily detaining it, or even destroying it.” However, what constitutes deterrence to an AI, and how to punish one, are questions still too complex to answer at this juncture.

A noteworthy contention opposing the granting of personhood to AI posits that such an act would endow artificial intelligence with rights and responsibilities akin to those exclusive to human beings. This argument underscores the inherent role of AI as a tool designed to aid humans, rather than placing it on an equal pedestal with them. It aligns with the refusal to grant copyright protection to artworks generated by AI, as illustrated by a recent case involving paintings produced by a specialised software tool called RAGHAV, which was contested for its lack of human authorship.

III. Are There Other Viable Alternatives To The Demand For The Personhood Of AI?

In spite of the benefits promised by granting legal personhood to AI, the concerns and challenges in implementing it are far too complicated to yield fruitful results. Exploring alternatives thus becomes imperative to address the issues arising from the non-recognition of AIs without triggering the challenges of granting electronic personhood.

Ugo Pagallo, in his book The Laws of Robots, suggests three levels of abstraction for policymakers to consider while addressing the issues related to granting personhood to robots and smart AI systems. The first evaluates AI’s legal personhood within the constitutional framework, akin to the pre-personhood existence of entities like the European Union. The second scrutinises the legal liabilities of AI systems in contractual dealings and business transactions. The last investigates the extra-contractual responsibilities of AI activities, specifically in scenarios such as product defects and manufacturer liability under tort law. This systematic approach offers policymakers a structured path to grapple with the complexities of AI’s legal status without immediate reliance on granting full legal personhood to AI entities.

Among these levels, the second introduces the concept of legal agency (as an alternative to legal personhood) for AIs. This has appealed to several scholars, as it allows legal accountability for AIs while evading the issues caused by granting legal personhood. Pagallo further highlights that the debate surrounding the legal status of AIs essentially suffers from a lack of distinction between the concepts of legal personhood and legal agenthood. The point of differentiation between the two is the degree of liability: while legal personhood contemplates the culpability of an AI system as independent of its creator, thereby attributing complete liability, legal agency grants limited liability in proportion to the system’s involvement.

In a nutshell, it may be said that granting legal agency narrows the scope of rights and obligations when compared to personhood. Since AI technologies are getting increasingly complex and developing cognitive abilities of their own, problems in identifying mens rea arise. This makes ascribing culpability a murky exercise. Legally binding agenthood contracts which assign specific rights and responsibilities to AI technologies could make the ascertaining of liability a much smoother process. For example, maintaining registries for AI agents, acquiring insurance coverage for AI, and imposing strict liability standards for AI owners have been proposed to deal with liabilities arising in civil cases. It has also been opined that granting AI technologies legal agenthood could serve as a good “practice run”, serving as a step towards exploring the evolution of post-human legal personality.

IV. The Way Forward

It is often said that as ever-evolving technology sprints forward, the law lags behind. The proposition to give legal recognition to AI would bring these technologies within the scope of legal scrutiny. Viewed from this perspective, the demand to grant legal status to AI is indubitably legitimate. However, as discussed throughout this article, the present lacuna regarding the legal personhood of AI systems only serves to dissipate liability and add to the confusion regarding accountability in today’s technology-driven world. By contrast, granting agenthood circumvents the problems that arise with the granting of full-fledged personhood to AI, while simultaneously ensuring that a certain degree of liability is assigned to it.

Even though granting agenthood is not a panacea for all the emerging issues associated with AI innovations, it certainly provides a head start in anticipating the conflicts that may arise from our increased interaction with AI. As remarked by Tanel Kerikmäe et al., “The idea of granting very specific areas of legal responsibility for the AI could allow us to start experimenting with it, to see where the weak points in the technology and legal systems lay.” In essence, the move towards agenthood establishes a nuanced approach, fostering a more balanced and accountable interaction with AI, thus facilitating a proactive stance in navigating the complexities that lie ahead.
