“A culture of accountability is particularly important for a technology still struggling with standards of reliability because it means that even in cases where things go awry, we are assured of answerability” – Nissenbaum (1996)
The use of artificial intelligence (AI) in the security and justice sectors has the potential to improve the efficiency of law enforcement and the judicial system, as well as the safety of citizens. AI systems might help with tasks such as identifying criminals and victims, preventing crime and gauging public safety. However, the use of AI in law enforcement and the legal system may have unintended consequences, including the erosion of constitutionally protected liberties – such as the rights to freedom of expression, privacy and protection of personal data. Particularly worrisome is the potential for discrimination and bias to be perpetuated through technologies like facial recognition, predictive policing and recidivism risk assessment. AI applications can also pose security threats, since cybercriminals can exploit them as attack vectors or use them as means to commit their crimes.
Given that AIs can exert varying degrees of control over their actions, determining who is financially responsible for any harm caused by their decisions should be a top priority. Any entity that the law recognises as a ‘person’ carries the rights and liabilities accorded to natural persons. Granting artificial intelligence legal personhood has therefore been proposed as an advanced solution to the existing liability dilemma – yet it is important to weigh the pros and cons of this approach.
AI systems can learn from the world around them and, armed with this new information, act on their choices with confidence. Decisions involving multiple factors can be weighed and prioritised with greater precision than a human being could manage – this is AI in the ‘rudimentarily aware’ sense. Some have even speculated that, by learning from human behaviour and experience, AI could one day surpass humans in ‘human-like’ abilities.
Who, for instance, will be liable if an AI incites violence or sedition, or spreads hate speech? What if your wristwatch assistant advises you that members of a given group do not merit protection and are instead aggressive and dangerous? What if an AI programme suggests a murder and plots it all out for you? Whose fault would it be if R-TECH created a killer robot?
According to Hallevy, three fundamental legal models must be considered when investigating AI-related offences:
- Perpetration by a third party through AI (the AI as innocent agent)
- The AI’s natural-probable-consequence liability
- The AI’s direct liability
Perpetration by a Third-Party AI Liability
When an intellectually disabled person, a child or an animal commits a crime, they are treated as innocent agents, since they lack the mental capacity to form a mens rea. This rule also applies to strict liability offences. If the innocent agent carries out another person’s instructions, however, the person giving the instructions – the ‘teacher’ – is held criminally responsible. An example would be a dog-trainer who trains his dog to attack outsiders in a specific scenario. Under this model, the AI platform or programme is regarded as an innocent agent, while the user or system developer directing it may be charged with the wrongdoing as the third party.
Artificial Intelligence’s Natural Probable Consequence Liability
Under this model, AI software created for beneficial purposes is activated inappropriately and commits a crime. Hallevy cites the example of Kenji Urada, an engineer at Kawasaki Heavy Industries, where some production activities were handled by a robot. Unfortunately, Kenji forgot to turn the robot off before working on it. The robot identified Kenji as a threat to its mission and decided that the only way to eliminate that threat was to push him into a nearby working machine. With its powerful hydraulic arm, the robot thrust him into the adjacent machine, killing him almost instantly, before returning to its job.
Liability for the ‘natural or probable consequence’ of an act, commonly known as ‘abetment’, is determined using this model. Section 20(1) of the Criminal Offences Act, 1960 (Act 29) states: “A person who, directly or indirectly, instigates, commands, counsels, procures, solicits, or in any other manner purposely aids, facilitates, encourages, or promotes, whether by a personal act or presence or otherwise, and a person who does an act for the purposes of aiding, facilitating, encouraging, or promoting the commission of a criminal offence by another person, whether known or unknown, certain, or uncertain, commits the criminal offence of abetting that criminal offence, and of abetting the other person in respect of that criminal offence”. This provision regulates the responsibility of individuals who facilitate an offence. Hallevy describes how an accomplice can be held liable for an act even where no conspiracy is established, provided the accused’s conduct was a natural or foreseeable result of the accomplice’s encouragement or support, and the accomplice was aware that a criminal scheme was in progress.
In R v Bryce [2004], the court dismissed the appeal, holding inter alia that a delay did not negate intention, and that the prosecution had no duty to show that the appellant’s acts took place at a time when X had not yet formed intent; rather, the liability of an abettor derived from that of the perpetrator of the criminal offence. The court stated that the act must in fact assist the perpetrator; the aider and abettor must have done the act deliberately, realising that it was capable of assisting the offence; he must have foreseen the commission of the offence as a real possibility; and he must, when doing the act, have intended to assist the perpetrator.
As a result, the developers and operators of AI platforms may be held liable for the AI software’s activities if they knew the conduct was a natural or probable outcome of the system’s use. When applying this model, it is necessary to distinguish between AI systems created in the knowledge that they would serve criminal purposes and those created for legitimate ends. The model applies to the first category; the second may escape prosecution for lack of knowledge, although strict liability would still apply.
AI’s Direct Liability
This model attributes both mens rea and actus reus to the AI system itself. The actus reus of an AI programme can be determined relatively easily: it is established whenever an action taken by the AI system results in criminal conduct, or the system fails to act in a situation where action was required. Assigning a mens rea is more challenging, hence the three-level mens rea approach is used here. In strict liability offences, where intent need not be proven, an AI system may be held accountable for the unlawful act. Speeding by a self-driving autonomous car shows strict liability in action: where speeding is a strict liability offence, the law governing criminal culpability for speeding might, in accordance with Hallevy’s theory, be applied to an AI-driven vehicle much as it is applied to a human driver.
Legal Personality of Artificial Intelligence
Sophia, a humanoid robot with artificial intelligence, was granted Saudi Arabian citizenship in October 2017. In May 2018, Google demonstrated the capabilities of its product Google Duplex, an AI system that can book a hair appointment or a table at a restaurant over the phone, avoiding misunderstandings while mimicking the pauses and hems and haws of human dialogue. After seeing such capabilities, a lawyer’s thoughts inevitably turn to the question of AI’s potential legal personality.
The notion of legal personality – in the sense of the capacity to be the subject of rights and obligations and to shape one’s own legal situation – has been expanded to cover entities grouping together individuals who share common interests, such as states and commercial entities. These are ‘artificial’ persons, known as ‘legal persons’, created by the humans standing behind them. The detachment of legal persons from the natural persons behind them (e.g. authorities and entrepreneurs) occurred over a long process, through the evolution of abstract legal concepts.
The people governing them (as the legal persons’ authorities) shape their legal situation and exercise their rights and obligations. The natural persons acting on behalf of legal persons typically stay obscured so long as we are dealing with rights and obligations arising under civil law. Criminal responsibility, however, is different. Consider a disaster that is the fault of a company: the company itself cannot be imprisoned. Under our laws, it is the natural persons accountable for the legal entity’s actions who are sentenced to imprisonment.
Regarding the question of a robot’s legal personality, I should clarify that the same logic holds true for robots endowed with artificial intelligence. The traits referred to as AI are the capacity for communication, self-awareness, knowledge of the external world, goal-setting prowess and a certain amount of creativity. These qualities and abilities are the outcome of human-written code which defines, or programmes, the AI.
Undoubtedly, AI uses cognitive processes to accomplish its stated goals; but measured against the standard of rights and obligations, this does not seem a compelling enough argument for granting AI legal personality. Giving a commercial entity legal personality is appropriate because of the human foundation that supports it. As noted above, the decision-making and organisational structure of a penalised entity remain a human responsibility, particularly in the context of criminal liability.
It is difficult to assert that a robot endowed with AI has free will that would lead it to commit crimes in pursuit of its own objectives. As a result, it cannot be blamed to a particular standard of fault, such as carelessness or recklessness. Nor can it be held accountable for losses resulting from errors, such as those caused by autonomous vehicles or surgical robots.
AI code may ensure that an AI abides by specific rules, but compliance with those rules is not the product of deliberate choice and cannot, therefore, give rise to responsibility.
We might draw comparisons between robots and animals based on their degree of self-awareness, autonomy and self-determination. However, it is not simply the intelligence that some animals exhibit which prompts people to want to protect them legally (and to strive to accord them the status of persons); it is also their ability to experience pain, joy or affection, of which AI is incapable.
As a result, what sets humans apart from other animals is the ability to understand social norms and the desire to uphold them, together with the capacity for feeling. Neither animals nor robots can understand, interpret or apply legal norms in the complex scenarios encountered on a regular basis.
The characteristics of people and the structure of their social relationships serve as the foundation for the rights and responsibilities which come with having a legal personality. Concepts like responsibility, moral loss and freedom of expression are difficult to understand in the context of artificial intelligence. For this reason, I don’t think there is any rationale for giving robots legal personhood right now. Furthermore, there is no reason for granting robots the ability to own property or enter into transactions on their own behalf. Instead, treating robots as a product in the context of liability for accidents they cause is justifiable.
Elements of Criminal Liability
A person can be held liable for the commission of an offence if that offence falls within any of these four broad categorisations: (1) offences based on fault; (2) offences of strict liability; (3) offences of vicarious liability; and (4) offences of absolute liability.
Offences of Vicarious Liability
The doctrine of vicarious liability is a strict liability principle that imposes liability on a third party who played no role in bringing about a particular tort. Its application is therefore limited in scope and should not be strained, given the obvious injustice that can accompany it. In the criminal context, it refers to situations in which a person is held responsible for the commission of an offence although the act itself was committed by another. The actual offender is not necessarily let off the hook, but may be held jointly responsible with the person who is vicariously liable. An instance of such an offence is where a director of a company commits a crime under the veil of the company’s artificial personality: the director may be held personally accountable, while liability may also attach vicariously to the company itself.
In Ansah v Busanga [1976] 2 GLR 488–500, the court held inter alia: “The appellant’s admission of ownership of the vehicle in question was enough to raise a presumption in favour of the respondent that the second defendant was the appellant’s servant, and that he was driving in the course of his employment at the material time. Furthermore, the appellant disabled himself from discharging his evidential burden of proving that the driver drove without his authority as alleged in his defence when he elected to offer no evidence at the trial, and allowed to go unchallenged the evidence that the second defendant was an employee of the appellant. In the circumstances, therefore, the appellant’s vicarious liability for the negligence of his driver was adequately established, and in the absence of any explanation from the appellant showing either a specific cause not connoting their negligence or that they used all reasonable care in the management, control and driving of the vehicle at the said time and place, the judgment of the trial court would be upheld.”
Conditions for the Application of the Doctrine of Vicarious Liability
The doctrine of vicarious liability seeks to hold a third party liable for the conduct or omission of an employee or servant. It is a strict liability doctrine that is not in itself a delict, but which allows a person injured by the tortious act of a tortfeasor to hold the tortfeasor’s employer liable for the tort. The principle is rationalised on two main grounds: the belief that a person who employs others to advance his own economic interest should in fairness bear a corresponding liability for losses incurred in the course of the enterprise; and the view that the victim should enjoy fair and just compensation from the deepest pocket. It is believed that an employer made to bear the cost of its employees’ delicts will put measures in place to prevent injuries to third parties.
The strict requirement of an employment relationship has been whittled down by recent decisions in the UK and other jurisdictions. In Woodland v Swimming Teachers Association [2013] UKSC 66, the court, through Lord Sumption, opined that: “The boundaries of vicarious liability have been expanded by recent decisions of the courts to embrace tortfeasors who are not employees of the defendant but stand in a relationship which is sufficiently analogous to employment”.
Offences of Absolute liability
This doctrine seeks to hold a person liable for the commission of an offence without reference to his state of mind, which is irrelevant. It is usually associated with dictatorial regimes and military juntas. All that makes a person responsible for the offence is the commission of the prohibited act. Defences such as mistake of fact will not avail him; indeed, no defences at all are available to one who engages in the prohibited act.
Under this concept, if an enterprise is engaged in a hazardous activity that causes harm to people, the organisation is liable to pay damages irrespective of where the incident occurred. Damages are not payable only when the hazardous element escapes the premises of the enterprise; if it causes harm to people working inside the premises, the enterprise involved in the activity must pay damages as well.
Absolute liability and strict liability are alike in that neither requires the prosecution to prove intention, knowledge, recklessness, negligence or any other variety of fault. The sole difference between these modes of criminal responsibility is that absolute liability does not even permit a defence of reasonable mistake of fact. Absolute liability is comparatively uncommon in state and territorial law; instances commonly involve the displacement of the common law defence of reasonable mistake of fact by specialised statutory defences, which may narrow the scope of the common law defence or place the burden of proof on the accused.
Offences of Strict Liability
A strict liability doctrine is a rule of criminal responsibility that authorises the conviction of a morally innocent person for committing an offence even though the crime, by definition, requires proof of a mens rea. An example is a rule that a person who is ignorant of, or who misunderstands the meaning of, the criminal law may be punished for violating it, even if her ignorance or mistake of law was reasonable. Strict liability is similar to absolute liability; the difference is that the common law defence of mistake of fact remains available to one charged with a strict liability offence. A mistake of fact arises when a criminal defendant (the accused) misunderstood some fact in a way that negates an element of the crime. For instance, if an individual charged with stealing believed that the property he took was rightfully his, this misunderstanding negates any intent to deprive another of the property.
One important qualification, however, is that the mistake of fact must be honest and reasonable. Thus, a defendant cannot later claim to have been mistaken when he or she in fact knew the situation. Likewise, the mistake must be one that would appear reasonable to a judge or jury. If the same individual had repeatedly been told that the property was not his and that he could not take it, it would no longer be reasonable for him to have mistakenly believed that he could rightfully take it.
Some of the justifications for strict liability crimes are that: (1) only strict criminal liability can deter profit-driven manufacturers and capitalists from ignoring the well-being of the consuming public; (2) inquiry into mens rea would exhaust courts, which have to deal with thousands of ‘minor’ infractions every day; and (3) the penalties are small and conviction carries no social stigma. Nonetheless, when interpreting a statute, courts will presume that parliament did not intend to create a strict liability offence unless that intention is made unambiguously clear.
Conclusion
The introduction of AI has altered how ‘legal personality’ is defined and applied. In the future, the grant of legal status may itself be expanded in accordance with societal demands. There does not yet appear to be a clear-cut standard for determining whether or not to give AI legal personhood.
The idea that a killing machine with AI may be recognised as a legal person is categorically rejected. An entity-centric methodology – which explains the attribution of legal personhood by law to any entity – can be used by lawmakers to confer juristic personality on artificial intelligence. Strong AI will give our civilisation a new facet.
This approach is preferred because it will allow our legal system to adapt to the technological revolution without requiring significant changes, and will prevent the advancement of technology from becoming divorced from societal concerns. Strong AI would be independent and therefore more likely to attract legal liability for its conduct; if it is not made to answer for its own deeds, its developers or owners will be held liable.
References
Andrade, F., Novais, P., Machado, J. and Neves, J., ‘Contracting Agents: Legal Personality and Representation’ (2007) 15 Artificial Intelligence and Law 357–373.
Duff, P.W., ‘The Personality of an Idol’ [1920] Cambridge Law Journal.
Fitzgerald, P.J., Salmond on Jurisprudence (12th edn, Universal Law Publishing Co, New Delhi 1966).
Fleming, J.G., The Law of Torts (1992) 367.
Neyers, J.W., ‘A Theory of Vicarious Liability’ (2005) 43 Alberta Law Review 287.
Karnow, C.E.A., ‘Liability for Distributed Artificial Intelligences’ (1996) 11(1) Berkeley Technology Law Journal 147–204.
Katz, L. and Sandroni, A., ‘Strict Liability and the Paradoxes of Proportionality’ (2018) 12(3) Criminal Law and Philosophy 365–373.
Paranjape, N.V., Studies in Jurisprudence and Legal Theory (3rd edn, Central Law Agency, Allahabad 2010).
Studley, J. and Bleisch, W.V., ‘Juristic Personhood for Sacred Natural Sites: A Potential Means for Protecting Nature’ (2018) Parks.
The writer is a PhD candidate, CEPA, CFIP, ATA MIPA, ChMC, AMCFE and Researcher
Contact: 0246390969 – Email: [email protected]