Contracts to Shift Liability in AI
Author: Pranshu Chaudhary,
II year, III Semester,
Dharmashastra National Law University.
Across the years of technological advancement, we have moved from the telegraph to Artificial Superhuman Intelligence (ASI). ASI can learn, understand and draw patterns from the behaviour of its user; it can also decide and take actions based on the emotions codified into it. These qualities make such systems more or less human. The problem is that humans can enter into contracts and can be penalised in cases of criminal liability, whereas robots cannot be penalised, and their capacity to contract is the issue this research paper tries to answer. The paper focuses on the enforcement of these contracts and on how they can be used to shift the liability of the buyer of the AI. A contract for the buying and selling of a General AI is a simpler concept to understand; the contract with an Artificial General Intelligence, however, does not exist as a tangible document but is only a virtual contract based on the interface. This research aims at finding the validity of such contracts.
Understanding Artificial Intelligence:
Artificial Intelligence is not intelligence in the literal sense but an interface of coded software running through an object with physical existence (hardware). AI systems are commonly classified by the functions they can perform, as follows:
- REACTIVE MACHINES: These AI systems are purely reactive; they cannot form memories or use past experience to inform or predict current decisions. A calculator is an example.
- LIMITED MEMORY: This type comprises machines that can look into the past. Self-driving cars already do some of this; a smartphone, for that matter, can serve as an example of this kind of Artificial Intelligence.
- THEORY OF MIND: Machines in the next class form representations not only about the world but also about other agents and entities in it. In psychology, this is termed "theory of mind": the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behaviour. Technically, such machines are called Artificial General Intelligence (AGI).
- SELF-AWARENESS: This is an extension of "theory of mind" AI. Consciousness is also called "self-awareness" for a reason: conscious beings are aware of themselves, know their internal states, and are able to predict the feelings of others. Without a theory of mind, we could not make such inferences. This stage of Artificial Intelligence is called Artificial Superhuman Intelligence (ASI).
Present Stand of Contracts on Artificial Intelligence
When we buy an AI device, we enter into a contract with the manufacturer that is non-negotiable in nature. Yet it does not work quite like a standard form contract, since even the manufacturer cannot decide the clauses of service that the AI will provide to the buyer. The manufacturer, then, is the person against whom such clauses can be enforced, and matters like these are dealt with in Consumer Forums under the Consumer Protection Act, 1986. But there is another contract that goes unnoticed by every legal system: the contract between the buyer and the AI, which looks more like a non-negotiable service agreement and is not recognised as a valid contract in any legal system across the globe.
Liability in case of Artificial Intelligence
In his book 'The Laws of Robots', Ugo Pagallo devotes a chapter to the contractual liability of Artificial Intelligence.
Under Section 11 of the Indian Contract Act, one of the competencies required to enter into a contract is soundness of mind, and a "mind" in AI can be inferred only from the third class onwards; below that, AI is incompetent to enter into a contract. Sophia, the only third-class AI so far, was declared a citizen of Saudi Arabia and is hence a person (legal, at least). How, then, can we enforce a contract against such an AI? Artificial Intelligence does not have independent existence; it has an owner, and the relation between the owner and the AI is like that of principal and agent. The owner of the AI would therefore be vicariously liable for all acts done by the AI (criminal, tortious and contractual alike). But when the AI acts outside the agency, in its individual capacity, who is to be held liable for those actions?
Here, the no-fault liability theory of contract comes into the picture. When we buy a device with Artificial Intelligence, we do not merely enter into a contract to buy goods; we also give an undertaking of liability for the misconduct of the Artificial Intelligence device. The warranty period of the device is the period during which strict liability is owned by the manufacturer. The activities of the AI can cause financial ruin to the liability-holder, even though the holder cannot decide the terms of the contract. This contract is based on the capability of the Artificial Intelligence, which the manufacturer can codify; but the manufacturer cannot bind the AI to do anything not codified in it merely because of the terms of its contract with the buyer.
For example, suppose a class III AI (robot) is employed in a company and, during working hours, the supervisor pats the robot; as a reflex action, the robot hits the supervisor, who dies. Under the present contractual arrangements, the liability must be well stated. If the incident occurs within the warranty period, the act falls under the strict liability of the manufacturer; if not, the conventional approach is strict liability against the possessor. The AI can think and analyse like a human, yet it cannot be bound under any jurisdiction for its act, because it can neither pay compensation nor be penalised. So what we can examine is whether the act occurred in the course of employment or under any other liability the owner has bound himself or herself to. But if the act falls outside the course of employment and outside the terms agreed in the contract between the manufacturer and the buyer, holding the owner liable is against the concept of justice. The strict liability arising in this relation is due to a legal vacuum or ambiguity. And, as discussed in 'Robotics, AI and the Future of Law' by Marcelo Corrales, Mark Fenwick and Nikolaus Forgó, and in 'The Laws of Robots' by Ugo Pagallo, the only way to settle it is to specify the liability in the contract for buying the AI.
The Virtual Contract and its Enforcement
The contract under which the Artificial Intelligence serves the owner is a virtual contract, non-negotiable in nature, as we cannot decide what service the device will provide us or its other terms and conditions; these are simply the optimum capacity of the AI. It is a contract without any existence, either in the form of an e-contract or a conventional contract, and hence its enforcement is a matter of concern.
To bring a contract like this into existence, we try to cover its terms in the purchase contract between the manufacturer and the buyer of the AI, where the strict liability is clearly stated. The warranty period is nothing but the period until which the manufacturer bears liability; after that, the strict liability of the owner begins.
As long as the contract concerns a lower AI, this setup has no flaw, as the lower AI is under human control. But such a contract, in the case of a higher AI, would be arbitrary in nature, as higher AI can decide its own actions. And when such AI can decide like a human, fixing strict liability for its conduct on the manufacturer or the owner is not just, as they can neither foresee nor limit the AI's actions.
Yet the ambiguity of the law remains, which is not fair to the owner or, as Pagallo puts it, "the activities of the AI can cause financial ruin to the liability-holder". So how can this ambiguity be resolved?
Here the virtual contract, the contract between the AI and the owner, becomes important. Until this stage of AI, it had no specific role to play; but now that the AI can think and decide for itself, it must bear its liability like an ordinary citizen, and punishing no one for the actions of an AI would be a threat to morality and social order. So we can bring into the picture that virtual contract which lay silent in the lower classes of AI. This contract between the owner and the AI can contain a clause exempting the owner from liability for actions done by the AI. If the action is due to a fault in the code written into the AI, the manufacturer must be held liable; but if the conduct arises out of the normal codification of either emotions or decision-making, the liability must rest with the AI, as an act without mens rea should not be punishable by law. We can thus carve out two further relations in the selling process: the virtual contract between the seller and the AI, and that between the manufacturer and the AI. These contracts must be entered into by the parties with free consent and must not contain unjust liability-sharing clauses.
But how can an AI pay compensation or be penalised?
At this question, the jurisprudence and research on AI come to a halt and the traditional notion prevails, since punishing no one would be a threat to social order. So all that contracts can do is decide who is to be held liable: the manufacturer of the AI or the owner.
But there is an alternative way through. We can require the AI to serve the party it has damaged, so that the party recovers compensation and damages for such period as the court may decide. For criminal liability, we can either award compensation (the very purpose for which tort law was established, as far back as 1206 AD) or, in the case of a heinous or serious crime, provide for permanently disabling the Artificial Intelligence, as the court thinks fit. But why would this be seen as a severe punishment? The answer is quite simple: AI of the third and fourth classes would perceive this punishment as death, since they think and feel like humans. The owner and the AI thus bring the contract (virtual until reduced to a written form signed by the owner and the AI) into existence. The ambiguity of the law can then be resolved, and the court may decide on the terms of the contract with the AI rather than merely apply the conventional principles of strict or vicarious liability against the owner or the manufacturer when the act was outside their directions.
A suit filed against an Artificial Intelligence currently falls into a complete legal vacuum. We can therefore settle liability through the contract between the AI and the owner, which would have legal enforceability. The question that arises out of this is competency to contract. Section 11 of the Indian Contract Act, 1872 lays down the essentials of competency: age of majority, sound mind, and not being disqualified by law. AI above class II has a codified thinking ability called "the theory of mind", so such AI (in working state) can be said to be of "sound mind". A sound mind with the capability and knowledge to decide can be treated as satisfying the criterion of majority; and as per the recent judicial trend in criminal cases, maturity matters more than age, so the age of majority may not be a material fact when discussing the capacity of AI to contract. There is no law on AI to date, so no law restricts AI from contracting.
But without drafting this contract, would any problem arise?
One of the purposes of contract is to predict. Defining every liability in the contract is thus a way of predicting the judicial decision, and in this way we can speed up the judicial decision-making process. Perhaps the fourth stage of a progressive society's growth is to create predictability within the legal vacuum, as discussed by Sir Henry Maine. In the American legal system, a contract can supersede the law, which in India is restricted by Section 24 of the ICA, 1872. So the contract may face certain restrictions in drafting, but it can be enforced; enforcement in a virtual form, however, is ambiguous and unpredictable. This contract therefore needs to be drafted as a formal legal document in order to be recognised and enforced.
Misha Ketchell, Understanding the four types of AI, from reactive robots to self-aware beings, THE CONVERSATION (November 14, 2016, 12:40 pm), https://theconversation.com/understanding-the-four-types-of-ai-from-reactive-robots-to-self-aware-beings-67616