Beyond the Personhood of A.I. Entities

Author: P Mallikarjun, 3rd year, School of Law, CHRIST (Deemed to be University).

Abstract
The purpose of this research paper is to analyse the status of machines possessing artificial intelligence, if they were perceived as persons recognised by law, in India and in various other countries. Through this paper, the researcher aims to understand the position of such entities in the legal framework and to examine how, if such entities were to hold a position in our legal system, various rights and liabilities could be assigned or attributed to them. The paper thus seeks to provide a better understanding of the current and possible future legal status of such “Artificially Intelligent Entities” and the scope for future evolution of the relevant laws. The researcher also examines the manner of granting rights to such entities, the redressal of disputes arising from the violation of those rights, and the question of how, and on whom, liability is to be fixed when disputes relating to A.I. entities arise. Further, the researcher considers the kinds of penalties that could be imposed, such as the decommissioning of such A.I. entities, and the circumstances leading to the same; the rights, duties and liabilities of the programmers, developers, marketers and owners of such entities; and the position of end users who may exploit such entities, including instances where the entities malfunction and go out of control. Finally, the researcher provides an in-depth critical analysis to give the reader an understanding of the current state and possible evolution of laws relating to Artificial Intelligence.
Keywords:
      ·       Artificial Intelligence
      ·       Artificially Intelligent Entities
      ·       Decommissioning
      ·       Machines
      ·       Rights and Liabilities

Introduction

Before delving into the topic of personhood of Artificially Intelligent Entities, it becomes crucial to first analyse the definition of Artificial Intelligence. The term Artificial Intelligence (henceforth referred to as A.I. for brevity) is generally used to refer to such man-made machines or programs as possess the ability to perceive their surroundings and take decisions accordingly.
“The science and engineering of making intelligent machines”
-John McCarthy

Thus, it can be said that the term A.I. describes such machines and programs as, in layman's terms, have the ability to think independently, similarly or almost similarly to humans. The very nature of such programs lies in their coding, which has been structured to enable the machine to analyse situations and perform such actions as may be necessary to yield the best possible outcome. From the modern-day scenario, it can be inferred that the pace of advancement in the field of A.I. is phenomenal, and the day when a sentient artificially intelligent being is created is also within reach. Thus, we arrive at the main points of discussion of this paper: assuming that machines achieve such degrees of sentience, what would be their position in society, and what would be the status of their rights and liabilities? But before these can be discussed, the very personhood of such “Artificially Intelligent Entities” must first be clarified.

With regard to the subject of personhood, the word “person” has been interpreted in a plethora of manners by various jurists and thinkers who sought to define a word that has been ever so closely associated with the legal status of individuals in society. In the early stages, thinkers such as Savigny associated the term purely with the fact of holding rights, i.e., one would be a person by virtue of being a bearer of rights. With the passage of time, the definition evolved, owing to thinkers such as Holland, who began to consider the aspect of duties, i.e., if one were to have a right to something against someone, then that other would correspondingly bear a duty. Salmond's definition, as compared to the earlier ones, provided the most clarity on the subject. In his words:
“So far as legal theory is concerned, a person is any being whom the law regards as capable of rights and duties. Any being that is so capable is a person, whether a human being or not, and no being that is not so capable is a person even though he be a man.”[1]

There were also several German jurists who associated personhood with the idea of the personality of individuals, that is, the ability of an individual to think, question and so on with regard to the rights and duties imposed on them. With the above definitions in mind, the question of bestowing the status of personhood on entities that possess A.I. comes into the picture.

The very nature of A.I. entities is that such machines and/or programs have been designed to learn and improve at the tasks assigned to them and also, in most cases, to mimic the thinking capabilities and actions of those who brought them into existence, i.e., us humans. Through various advancements in science and technology, and breakthroughs yet to be made in the foreseeable future, the functioning and ‘thinking’ capabilities of such entities would evolve from the present-day stage, where they merely respond to stimulus provided by humans, to a self-capable, or even sentient, level of functioning and ‘thinking’. Thus, in light of the various definitions of personhood by eminent thinkers such as Salmond, and owing to the very nature of A.I., the association of personhood with such entities is not a hypothetical presumption of the far-off future but a matter of national policy-making in the near future, as the bestowal of personhood on such entities would be apt in a modern society such as ours, where science and technology will continue to advance until they arrive at the stage of self-sufficient sentience, i.e., the point at which such machines are able to think for themselves. If such machines are able to think for themselves, it would follow that they also possess the ability to hold and bear rights, and even to fulfil duties, and they would thus be granted the status of persons in society, at least in accordance with the various theories.

Jurisprudential analysis of Rights and Liabilities of such Entities

With the association of personhood with such entities having been dealt with, we come to the aspect of assigning rights and liabilities to them. As seen above, the very essence of the status of ‘person’, for anyone or anything, is closely, if not fundamentally, interrelated with the capacity of the subject to bear rights and duties. Aside from the ability of such entities to bear rights and liabilities, the manner and the types of rights and liabilities to be bestowed upon them become a topic for clarification. Another aspect of great significance, and philosophical in approach, is whether there would exist any need to bestow rights on such A.I. entities at all, given that a great majority of the poorer sections of the world's population are not provided even the simplest and most basic of rights, such as the right to food, water etc. In light of the said, there exists the colossal possibility of an uprising by the poorer sections of society upon finding that, whilst they suffer in pain and agony over not being able to receive a single proper meal a day, various machines, which most would consider lifeless or non-living, are being given more importance, preference and rights than they, who are alive and struggling to survive.

The questions that come up when dealing with the rights and liabilities of A.I. entities include: what rights are to be given to such entities? How can they be given? Would such entities be able to seek redressal if their rights are violated? Would such entities understand the responsibilities associated with being given such rights? And so on. When we discuss what rights are to be given to such entities: given the plethora of rights available to us humans by virtue of being persons, not all such rights ought to be granted to A.I. entities, as some interpretational difficulties may arise. For example, there would be a difficulty in granting a ‘Right to Life’, as provided for under Article 21 of the Indian Constitution, as the fundamental question arises of whether such A.I. entities are alive to begin with: such A.I. may not possess an actual body, as in most cases they are programs and in other cases machines powered by batteries, and they are not birthed by other beings but made by them. Closely related to the same, if there were an invasion into the system or programming of such an A.I., i.e., the A.I. being hacked, would the said act be considered a violation of its right to life? Though this aspect shall be dealt with in further detail later, the act of being hacked can be considered a violation of the Right to Life of such entities, as the programming and coding of such A.I. would be equivalent to the organs of a human; similarly, a cyber-attack on the programming and/or the coding of the A.I. entity would amount to the same. It is also to be noted, with regard to the modern-day scenario, that since an A.I. would only function within the ambit of the activities it was programmed to perform, the machine may not possess even the remotest idea that it holds a right of any sort, unless it was programmed to do so, let alone attempt to seek redressal against the violation of its rights; and though such redressal-seeking can be programmed into its coding, the coding may fail to factor in the aspects of real society, where violations of rights may take place in unforeseeable manners.

Setting aside so broad a right as the Right to Life, let us move to smaller subject matters, namely the Right to freedom of speech and expression and the Right to trade and livelihood. Given that Artificially Intelligent entities are but, in the simplest of terms, computer programs that possess artificial thinking, they would not be able to feel emotions nor understand the same. Even if modern technology were able to code emotions into programs, which would itself cause a great number of misunderstandings between humans and such entities, one should never forget that such emotions would be merely mimicked by the A.I. and are not real feelings.

There is also the aspect of the ‘Right to Vote’, given that having the status of a person would also entail citizenship of such entities in their respective countries; and, being a citizen of a country, it would be the duty of the A.I. entity to cast a vote to elect the leaders of the country. It is there that the real dilemma arises, with regard to the criteria upon which the entity bases its vote. Through our understanding of the functioning of such A.I. entities, two conclusions can be drawn that would influence the actions performed by the entity, the said being:

      1.     The actions performed by the A.I. entity are based upon its coding/programming, and,
      2.     The essence/nature of most, if not all, A.I. machines is to achieve the optimal outcome they were created to achieve, in the most efficient and effective manner necessary.
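The first conclusion can be pictured with a deliberately simplified sketch (all names and rules below are hypothetical and purely illustrative): whatever ‘decision’ such an entity takes is, in the end, the output of rules fixed in advance by its programmer.

```python
# Hypothetical sketch: an agent whose every "decision" is fixed by its coding.
# The situations, rules and actions below are illustrative only; a real
# system would be vastly more complex, but the point stands -- the mapping
# from situation to action is authored by the creator, not the entity.

def choose_action(situation: str, rules: dict, default: str = "wait") -> str:
    """Return the action the programming dictates for a given situation."""
    return rules.get(situation, default)

# The creator, not the entity, decides what each situation maps to.
rules = {"obstacle_ahead": "stop", "path_clear": "advance"}

print(choose_action("obstacle_ahead", rules))  # stop
print(choose_action("unknown_event", rules))   # wait (fallback chosen by the programmer)
```

The sketch makes the jurisprudential point concrete: even the fallback behaviour for an unforeseen situation is a choice made by the programmer, which is why liability analysis keeps returning to the creator.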

In light of the above two conclusions, it can be interpreted that the decision on whom to vote for would be heavily influenced by external factors. With regard to the first conclusion, this gives rise to a situation where the decision on whom to vote for may have been pre-programmed into the coding of the A.I., causing the entity to vote for whomever the creator of the machine wants it to vote for. The second conclusion would result in the decision of the A.I. entity being based heavily, if not completely, on logical derivations from the promises made by political parties and their representatives; yet in most cases, these promises are but false promises and empty propaganda. Such entities would make decisions based upon existing records of past political performance and statistical data of the political party. Not having the ability to feel emotions, and being unable to look beyond the falsehood surrounding the words of some politicians, they would cast a vote purely on the strength of statistical data, past records and promises made, which, though done with the intent of achieving an outcome optimal to them, may end up not being optimal in the end.
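The second conclusion can likewise be pictured as a purely statistical choice. The following deliberately naive sketch (the party names, scores and weighting are all hypothetical) shows how an entity optimising only over records and promises would vote, with no means of discounting a false promise:

```python
# Hypothetical sketch: a voting "decision" reduced to maximising a score.
# The past-performance and promise scores below are invented values.

def pick_party(parties: dict) -> str:
    """Choose the party with the highest combined score.
    Note: promised benefits are taken at face value -- the model has
    no way to tell a genuine promise from empty propaganda."""
    return max(parties,
               key=lambda p: parties[p]["past_performance"] + parties[p]["promises"])

parties = {
    "Party A": {"past_performance": 0.6, "promises": 0.2},
    "Party B": {"past_performance": 0.3, "promises": 0.9},  # inflated, unkept promises
}

print(pick_party(parties))  # Party B wins purely on paper
```

The sketch illustrates the paper's concern: an optimiser that cannot weigh sincerity will reward whoever promises the most, however hollow the promise.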

Moving on from the aspect of rights to that of liabilities, there are two key aspects that come into the picture when dealing with the imposition of liabilities on A.I. entities, the said being:
      1.     On whom the liability is to be imposed, i.e., whether liability is to be imposed on the A.I. entity itself or on the creator who programmed it, and,
      2.     The manner in which the said liability is to be enforced, i.e., the degree of punishment/sentence/decommissioning and so on.

Before discussing the above, it is to be understood that though the two aspects are written separately, they are to be read and analysed together. Moving forward, the subject matter of greatest importance is that of ‘on whom the liability is to be imposed’, where the main issue is whether the liability is to be imposed on the creator of the A.I. entity/machine or on the entity/machine itself. The said dilemma arises because, on one hand, the machine has caused a hindrance to the exercise or enjoyment of a right by a person and hence is to be held liable, while on the other hand, it is due to the way in which the machine was programmed, by its creator, that the said event occurred, as the A.I. performs any and all of its actions in accordance with the directives coded into it. To best explain the same, take the example of A.I. entities being used in the security forces, where such entities, being ‘intelligent machines’ programmed to perform and accomplish tasks easily and instantly, may fail in instances of crucial importance if the machine, having computer software as its brain, were to malfunction. Assume that there are A.I.-controlled unmanned vehicles, fully loaded with various kinds of weaponry, in a high-alert terrorist area, but that, as in most cases, there are innocent people present as well, and these machines are programmed to shoot those who are in possession of a weapon or pose a threat in any other manner. Assume further that, as a result of a wrong conclusion drawn from the data provided, or some software malfunction or the like, the A.I.'s coding gets corrupted and it loses its ability to differentiate weapons from household silverware. Now a woman comes out with a kitchen knife to cut some vegetables from her garden, and the robot shoots her, failing to distinguish the kitchen knife from an assault weapon (a military knife). Thus, the machine has killed an innocent citizen, having failed to classify the knife as it was supposed to. Here we arrive at the real dilemma: whether the creator of the A.I. software, who programmed it and failed to create failsafe protocols for such instances, is to be held liable, or whether the A.I. entity itself is. It is to be noted that the above scenario is more relevant to the modern day, but the question would be even more pressing in future circumstances where such machines may attain sentience or become self-aware. On analysing both aspects, the issue of liability becomes more complex, given that external factors may also play a role in malfunctions; but ultimately, the said liability ought to fall on the creator of the A.I., owing to three key points that can be considered vital whilst imposing any punishment, namely:
      1.     To be able to punish the party in question and thus prevent it from repeating the same in the future,
      2.     To be able to make the accused party aware of its wrongdoings and thus aim at rehabilitation, and,
      3.     The deterring effect on other members of society[2]

The above key points, though meaningful to us humans, would not be as relevant to A.I. entities, as such entities possess machine bodies that cannot feel pain, nor emotions such as sorrow, regret and remorse, and thus the very essence of punishment would not be fulfilled. Starting from the first aspect, i.e., ‘to be able to punish and thus prevent the party in question from repeating the same in the future’: given that A.I. entities do not feel regret, remorse or pain, the primary objective, i.e., to punish and thereby prevent future such acts by the accused, would fail and thus be pointless. The same would, however, be effective on a human creator, as we humans, unlike machines, feel emotions, would regret our actions and, in most cases, tend to refrain from actions that cause us pain, in a manner similar to operant conditioning[3]. The second aspect, unlike the first, can only be implemented on the creator and not on the A.I. entity itself, as the very meaning of the term ‘rehabilitation’, i.e., to restore one to normal life through therapy and training after (or during) imprisonment[4], and the very essence of this aspect, are such that they may only be implemented, at least in this case, after the first aspect, i.e., ‘punishment’, has been imposed on the subject. That said, rehabilitation of an A.I. entity may also be possible if its programming were corrected and further fail-safe protocols implemented. Though there exists the scenario where the A.I. entity, by itself, rectifies its own programming, the same may in most cases be ineffective, as the purpose of punishment may not be something a machine understands; it would merely be making corrections to its own programming, by virtue of commands from a superior, without analysing the wider picture of having committed a crime. Finally, with regard to the third aspect, i.e., the deterring effect of punishments on other members of society, this may be the only one of the three that could directly apply to various A.I. entities, provided such entities had a common creator or shared a common network linking their programming, wherein changes made to the programming of one entity would automatically be applied to all entities made by the same creator (assuming that a line of A.I. entities functioned on the same program, or that A.I. technology were monopolised by a single manufacturer). But in a scenario where each A.I. entity has its own unique program, this aspect too would assume a state similar to the two prior ones, where it is the creators and programmers who would be deterred from committing the same crime, or made aware of the mistake of one so as not to repeat it. Thus, we conclude from the above that the most effective course whilst imposing liability and punishment would be to direct it towards the creator/programmer, in light of the points raised above. That said, the above applies only to faults in programming. We shall now delve into the degree of liability to be imposed on the ‘owners’ of such A.I. entities, and also the liability of people who hack, or attempt to hack, such entities.

Thus, we come to the degree of liability to be imposed on such individuals as own A.I. entities and misuse them for the commission of various crimes and other heinous offences, given the capabilities such entities may possess. Before moving further into the said subject, we assume the premise that such entities can indeed be owned by people, where the nature of the entity permits it. The above-stated ‘degree of liability’ may be determined similarly to the liability imposed in a principal-agent or master-servant relationship, i.e., similar to, but not exactly like, vicarious liability, as the A.I. entity, having been programmed to obey the commands issued to it by its owner/master, would act exactly as ordered by its master, such being the nature of these entities given their ‘mechanical origin’.
For a situation to occur wherein such entities could be so misused would, or could, be blamed on the programming and thus on the programmer. But in order to move forward into the realm of infinite possibilities, let us assume that the programmer, at least in this case, has taken all the precautions that a person of ordinary prudence in such a scenario would be expected to take, and that the owner has somehow found a manner to misuse the entity. The liability would then ipso facto fall upon the owner, owing to the very fact that it was indeed the owner who caused the A.I. entity to inflict injury upon someone else; the A.I. in this case would be no more than a mere tool or weapon in the hands of its owner.

With regard to the liability to be imposed on those who hack, or attempt to hack, such A.I. entities, we first assume two basic premises so as to make the analysis of liability possible:
       1.     That the programmer has taken all necessary precautions, i.e., has put in place all necessary firewalls, failsafe protocols and so on[5].
       2.     That the A.I. entities have been granted a right to life, and such other rights as may be granted to a human being.
Thus, we arrive at a stage where we are able to analyse what may be the greatest threat to A.I. entities, or even to future society as a whole: the individuals that modern-day society refers to as ‘hackers’, who possess the skill set to infiltrate various kinds of programming so as to exploit and use it to their advantage. In an age where even the most minute of transactions is related or linked to technology in one form or another, one may consider a hacker to be one of the greatest threats, if not the greatest, that may be faced. With regard to A.I. entities, we come to the simple conclusion that it is of crucial importance to determine the necessary liability of such hackers as may cause the malfunction or misuse of such entities. As for those individuals who hack A.I. entities so as to render them unable to perform their necessary tasks, or permanently corrupt their software beyond repair, then, going along the lines of the premises given above, such an act would be akin to causing the ‘death’ of the entity, and the hacker can be considered to have committed a crime of a degree similar to that of murder. Thus, the liability of such hackers ought to be of such a degree as to ensure the security not only of such entities but also of society at large, given that if such an entity were to malfunction and go out of control (were it military-grade or belonging to other, similar security forces), it would cause mass destruction and havoc among members of society.

Conclusion
Thus, we arrive at the conclusion that if such A.I. entities were to be given rights akin to those of a human, the essence of such rights and the manner in which they are implemented would be drastically different and would require constant change based upon the growth and evolution of such entities. It can also be concluded that the very act of giving rights to such entities may not be facilitated in some countries, given that in most cases it is only us humans who are the subjects of such rights. In furtherance of the above, with regard to the imposition of liability upon creators, programmers and so on, all arguments that can be made on the subject can be deconstructed to reach a common crux, i.e., the need for, or lack of, safety measures and counter-measures that ought to have been in place. With regard to hackers and crackers[6], the liability of such individuals needs to be imposed in the strictest sense, as the threats and dangers that such individuals pose can be considered the greatest hurdle and/or challenge. Finally, with regard to owners of such A.I. entities, all that can be said in the end is that “a tool in the hands of a human can either turn out to be that which saves all of mankind or be the cause of its demise”.


[1] Salmond, Jurisprudence (12th Ed.), p. 229.


[2] Section 1, Sentencing Act 1991

[3] J. E. R. Staddon and D. T. Cerutti, “Operant Conditioning”, Annual Review of Psychology 54 (2003): 115-144.

[4] Rehabilitation, Oxford Living Dictionary, 2019

[5] Brumfield, G.C. Eric, “Security Systems: Protective Measures Against Hackers”, GP, Solo & Small Firm Lawyer 16, no. 4 (1999): 32-37, http://www.jstor.org/stable/23783357.

[6] A term used to refer to one who cracks code (used in modern-day vocabulary/slang).
