Policy perspective – contemporary issues in determining the criminal liability of artificial intelligence (part 1)


“The field of Artificial Intelligence is set to conquer most of the human disciplines; from art and literature to commerce and sociology; from computational biology and decision analysis to games and puzzles.” ~ Anand Krish 

Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a wide range of innovations including autonomous vehicles and connected Internet of Things devices in our homes.

AI is even contributing to the development of a brain-controlled robotic arm that can help a paralysed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionising and benefitting nearly all aspects of our society and economy: everything from commerce and healthcare to transportation and cybersecurity. But the development and use of these new technologies are not without technical challenges and risks.



Artificial intelligence is one of the most revolutionary inventions of our time. AI and Machine Learning (ML) are changing the way in which society addresses economic and national security challenges and opportunities. These technologies are being used in genomics, image and video processing, materials, natural language processing, robotics, wireless spectrum monitoring, and more; and they must be developed and used in a trustworthy and responsible manner.

While answers to the question of what makes an AI technology trustworthy may differ depending on whom you ask, there are certain key characteristics that support trustworthiness – including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and mitigation of harmful bias. Principles such as transparency, fairness and accountability should be considered, especially during deployment and use. Trustworthy data, standards and evaluation, validation and verification are critical for the successful deployment of AI technologies.

The technical capabilities and limitations of AI, along with ethical concerns about the effects of AI systems, give rise to new legal difficulties in areas including product liability, negligence and intellectual property. AI law is one of the most difficult and rapidly evolving fields of law, due to the worldwide attention it receives from regulators and lawmakers.

Gabriel Hallevy raises a related question: could we hold smart robots criminally liable if they assaulted us? In what ways can we legally fight back? Technology has evolved at a lightning rate, and robots are taking over simple tasks that used to require human effort. There is no essential distinction between robot and human drivers, vehicles, or mobile devices so long as humans employ them as simple tools. When robots become advanced, we usually say that robots ‘think’ for us. The difficulty arises when robots transition from ‘thinking’ machines, in the figurative sense, to thinking machines without the quotation marks: artificial entities with genuine artificial intelligence. Do you think they pose a threat?

There have been various incidents in which a robot or AI ‘killed’ humans – such as in 1981, when a 37-year-old Japanese employee of a motorcycle manufacturer was killed by an industrial robot working beside him. There are more: the robot that killed a worker at a Volkswagen plant in Germany in 2015; the robot that killed a scientist in Japan; the Facebook chatbots that were shut down after they began chatting in a language of their own; and the citizen robot that goes by the name ‘Sophia’.

Most forms of artificial intelligence nowadays employ some form of machine learning or reinforcement learning to tackle massive datasets. Like humans, the AI can learn and improve at its task over time without being explicitly programmed to do so. To provide users with what they call ‘unique, tailored experiences’, major Internet firms are combining AI with reinforcement learning techniques.
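
For readers who want a concrete picture of the learning loop described above, the following short Python sketch is purely illustrative: the option names and reward probabilities are invented. It shows an epsilon-greedy reinforcement-learning agent that improves at its task with experience, without any new programming from its designers.

```python
import random

# Hypothetical "content" options with invented reward probabilities.
REWARD_PROB = {"option_a": 0.2, "option_b": 0.5, "option_c": 0.8}

counts = {arm: 0 for arm in REWARD_PROB}
values = {arm: 0.0 for arm in REWARD_PROB}  # running estimate of each option's value
EPSILON = 0.1                               # fraction of the time the agent explores

for step in range(10_000):
    if random.random() < EPSILON:
        arm = random.choice(list(REWARD_PROB))  # explore a random option
    else:
        arm = max(values, key=values.get)       # exploit the best estimate so far
    reward = 1.0 if random.random() < REWARD_PROB[arm] else 0.0
    counts[arm] += 1
    # Incremental mean update: the agent "learns" without being reprogrammed.
    values[arm] += (reward - values[arm]) / counts[arm]

print(values)  # the estimates converge toward the true reward probabilities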

Existing AIs have already shown a startling ability to make unpredictable choices in a variety of contexts. Many AIs have been implicated in fatal accidents where it is unclear how much responsibility they should bear. This has caused others to worry that atrocities will be committed for which no human can be held responsible, as in the case of alibi under section 131 of the Criminal Procedure Act, 1960 (Act 30), hereinafter called Act 30.

In addition, difficult new questions arise when an AI does something that cannot be attributed to human behaviour. AI has the potential to act in ways that are both unexpected and opaque. Many AIs rely on techniques whereby a computer programme is designed by humans but then evolves in reaction to input, without further programming.

This means the AI is capable of behaviour that its creators may not have anticipated. The reasons or mechanisms behind an AI’s actions may be difficult, if not impossible, to ascertain. While it is conceivable in principle to shed light on why an AI acted as it did, doing so can be prohibitively resource-intensive. When an AI exhibits unpredictable and autonomous behaviour, the possibility of assigning criminal liability to the AI itself becomes an issue.
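
The opacity problem can be illustrated with a small, assumed Python sketch: a model is trained on data governed by a hidden rule that no one ever writes into the program, and what it “knows” afterwards exists only as unlabelled numbers in its weights. The data and rule here are invented.

```python
import math
import random

random.seed(0)

# Invented training data governed by a hidden rule the programmer never states.
data = []
for _ in range(200):
    x = [random.random(), random.random()]
    y = 1.0 if x[0] + 2 * x[1] > 1.2 else 0.0
    data.append((x, y))

# Train a tiny logistic model by gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(500):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# The learned parameters are just numbers; nothing in them states the rule
# the model now follows, nor what it will do on inputs it has never seen.
print(w, b)
```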

Existing criminal jurisprudence does not have the rules to deal with crimes committed by AI systems that are irreducible to humans. Because of this, it is crucial to address the question of how to hold an AI accountable for unlawful acts. However, section 5 of the Criminal Offences Act, 1960 (Act 29), hereinafter called Act 29, offers some solace in respect of the position above.

The case of Republic v Military Tribunal; Ex parte Ofosu-Amaah & Anor (1973) emphasises the position stated in section 5 of Act 29 (supra). The appellants had been convicted of conspiracy to commit subversion and contended that no such offence existed under the offence-creating statute. The court held that if any conduct is alleged to be an offence under any law, the accused can, in addition to the substantive offence, be charged with other offences in Part 1 of the Act, although the other enactment does not specifically mention them. The language of section 5 is therefore wide enough to allow the formulation of a charge of conspiracy, although the offence did not exist under the particular enactment.

Additionally, section 13(1)-(2) of Act 29 says: “(1) A person who intentionally causes an involuntary agent to cause an event shall be deemed to have caused the event. (2) For the purposes of subsection (1), ‘involuntary agent’ means an animal or any other thing, and also a person who is exempted from liability to punishment for causing the event, by reason of infancy, or insanity, or otherwise, under the provisions of this Act.”

In contrast, Article 19(5) and (11) of the 1992 Constitution takes a different view: “(5) A person shall not be charged with or held to be guilty of a criminal offence which is founded on an act or omission that did not at the time it took place constitute an offence. (11) No person shall be convicted of a criminal offence unless the offence is defined and the penalty for it is prescribed in a written law.”

Ghana recognises that, as part of the inherent nature of technology, issues relating to data subject rights, data controller responsibility, and regulatory oversight and efficiency require a legal framework that ensures data subjects’ privacy rights are not violated in the pursuit and implementation of technology by data controllers.

The 1992 Constitution of the Republic of Ghana (‘the Constitution’) is the supreme law of Ghana and the instrument from which every piece of legislation derives its validity in the country. The primary legislation protecting data privacy is the Data Protection Act, 2012 (‘the Data Protection Act’). Its purpose is to establish a Data Protection Commission (DPC) to protect individuals’ privacy and personal data by regulating the processing of personal information; to outline the process for obtaining, holding, using or disclosing personal information; to define the rights of data subjects and prohibited processing conduct; and to govern third-country processing of data relating to data subjects covered by the Act, third-country data subject processing in Ghana, and related matters.

If we are going to dig into this problem, we will need to do the practical work of considering whether we can assign mens rea and actus reus to AI. To be criminally liable, an AI entity needs both mens rea (a guilty mind) and actus reus (a guilty act). In the case of Deepa and Ors v S.I. of Police and Anor, the court held that: “Normally a charge must fail for want of mens rea but there may be offences where mens rea may not be required. But actus reus must always exist. Without it there cannot be any offence. Mens rea can exist without actus reus, but if there is no actus reus there can be no crime. Even if the mens rea is present, there can be no conviction without the actus reus”.
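
As a schematic illustration only (our simplification, not a statement of Ghanaian law or legal advice), the conjunctive structure of the Deepa holding can be expressed in a few lines of Python: actus reus is always required, while mens rea may be dispensed with for strict-liability offences.

```python
# Encodes the Deepa holding as boolean logic (a simplification for illustration).
def criminally_liable(actus_reus: bool, mens_rea: bool,
                      strict_liability: bool = False) -> bool:
    if not actus_reus:
        return False               # "if there is no actus reus there can be no crime"
    return mens_rea or strict_liability

print(criminally_liable(actus_reus=False, mens_rea=True))   # False: no act, no crime
print(criminally_liable(actus_reus=True, mens_rea=False))   # False: guilty mind missing
print(criminally_liable(actus_reus=True, mens_rea=False,
                        strict_liability=True))             # True: strict liability
```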

However, if criminally punishing artificial intelligence (AI) offers no positive benefits, or if the negative implications of direct criminal punishment of AI outweigh the beneficial consequences, it may not be prudent to impose criminal liability on AI. Furthermore, if there are superior alternatives that can deliver roughly the same (or greater) advantages as direct AI punishment, it may not be practicable to impose criminal liability on an AI. In this light, the article will investigate the feasibility of directly imposing criminal liability on an AI in situations where the crime committed by it is irreducible.

Animals were even tried and punished in the Middle Ages, and it was not until the 18th century that it was determined that animals lacked the mens rea necessary for criminal liability. Some have claimed that imposing criminal liability directly on AI would be the same as imposing it on animals. However, this line of thinking treats AI like a simple machine that can only do what its programmers and users tell it to. Gabriel Hallevy contends that all the elements of mens rea (knowledge, intent, negligence, etc.) needed to impose criminal liability can be satisfied by AI.

Knowledge is defined as the sensory reception of factual data and its processing, and most AI systems are well equipped for such reception, with sensory receptors for sight, sound, physical contact and the like. In a manner strikingly similar to how the human brain works, these receptors send the incoming data to central processing units for analysis. It might therefore be argued that an AI satisfies the knowledge requirement for a crime.
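
The following minimal Python sketch illustrates knowledge in the sense Hallevy describes: raw sensory readings are received and processed into stored factual propositions. The sensor names and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str   # e.g. "camera", "touch" (hypothetical sensor names)
    value: float  # raw measurement from the sensor

def process(readings: list[Reading]) -> dict[str, bool]:
    """Turn raw sensory input into stored factual propositions."""
    facts = {}
    for r in readings:
        if r.sensor == "camera":
            facts["object_present"] = r.value > 0.5   # invented threshold
        elif r.sensor == "touch":
            facts["contact_made"] = r.value > 0.0
    return facts

# Reception plus processing yields "knowledge" of the system's surroundings.
print(process([Reading("camera", 0.9), Reading("touch", 0.0)]))
```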

To commit an offence with specific intent means that the offender intended a particular outcome to occur. To function, most AIs are trained to set goals and then take steps to realise those goals. The goals of AI systems are often predetermined. Therefore, AI can likewise be credited with having a deliberate purpose.
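
A toy Python sketch of such goal-directed behaviour might look like this, with the target state assumed for illustration: the goal is fixed in advance by the designer, and the system simply selects whichever action brings it closer, which is the sense in which an AI’s “purpose” is predetermined.

```python
GOAL = (5, 7)  # hypothetical target state, fixed in advance by the designer

def step_toward(pos, goal):
    """Pick the single move that most reduces distance to the goal."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    def dist(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    return min(((pos[0] + dx, pos[1] + dy) for dx, dy in moves), key=dist)

# The system acts, step by step, until its predetermined goal is realised.
pos = (0, 0)
while pos != GOAL:
    pos = step_toward(pos, GOAL)
    print(pos)
```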

When assigning blame for certain crimes, such as acts motivated by hatred, emotions must be considered. Because of their lack of empathy, AIs cannot be held accountable for these types of crime. The majority of crimes, however, do not require emotions to determine blame, and a lack of emotion is not an absolute defence against responsibility. In light of this, some have suggested that AI can form the mens rea for the majority of crimes.

However, the Chinese Room Argument raises doubts about the accuracy of this attribution of mens rea. Can it be said that an AI genuinely comprehends what it is processing, even if it has sensory receptors that give it input that can be processed internally? The following analogy should help clarify this argument:

An English-speaker who does not know Mandarin is confined in a room with a symbol-processing programme written in English. A note written in Mandarin is slipped under the door by native speakers outside. To respond, the English-speaker follows the programme’s rules to produce Mandarin symbols. The Mandarin-speakers outside the room mistakenly believe the man inside knows Mandarin, even though he comprehends neither their note nor his own reply: he has simply followed the programme’s instructions and generated a response. In a similar vein, it is said that AI blindly executes a pre-written programme in a computer language without fully comprehending the meaning of the questions it is asked, or the nature of the responses it is expected to provide (which may take the shape of action).
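
To make the analogy concrete, here is a deliberately trivial Python sketch; the phrases and the “rule book” are invented for illustration. A lookup table produces fluent-looking replies while attaching no meaning whatsoever to the symbols it manipulates.

```python
# The "room": rules mapping input symbols to output symbols, with no understanding.
RULE_BOOK = {
    "你好": "你好！",           # "hello" -> "hello!"
    "你会说中文吗": "会一点。",  # "do you speak Chinese?" -> "a little."
}

def respond(symbols: str) -> str:
    # Match shapes against rules; attach no meaning to them.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "please say it again."

print(respond("你好"))  # a fluent-looking reply, produced without comprehension
```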

Artificial intelligence may identify specific situations and act according to its predetermined instructions or its acquired knowledge. In John Searle’s view, AI cannot understand the context of its acts: once it recognises a situation, it merely imitates the actions of others who have faced the same scenario before it, or responds mechanically according to the rules. On this view, artificial intelligence lacks the mens rea essential for criminal guilt, since it cannot appreciate the meaning and, by extension, the consequences of its acts, and it would therefore be incorrect to assign mens rea to a computer. The debate around this point, however, is far from settled and fraught with controversy.

If an AI commits an act or omission while it is free to control its own mechanism and its parts, then it can be held responsible under the law as if it were a human.

In conclusion, once you grant AI the ability to form the necessary mens rea, the question of whether to punish AI systems naturally follows. My inquiry has switched from “can we do it?” to “should we do it?”. According to Hallevy, there is no reason to exclude the criminal liability of an AI if both the mens rea and actus reus requirements are met. While these elements alone are sufficient to establish criminal liability, it is still important to weigh whether there are positive benefits to criminally punishing AI, and whether there are better, or at least realistic, alternatives to such impositions.

Part 2 of this article will consider the principle of causation, AI risk management and control; and finally, the readiness of Ghana to adopt AI operations.

References:

  1. The 1992 Constitution of the Republic of Ghana
  2. Criminal Offences Act, 1960 (Act 29)
  3. Criminal Procedure Act, 1960 (Act 30)
  4. Data Protection Act, 2012 (Act 843)
  5. Anonymous, ‘Robot Kills Worker at Volkswagen Plant in Germany’, https://www.theguardian.com/world/2015/jul/02/robot-kills-worker-at-volkswagen-plant-in-germany (2015), accessed 10 April 2017
  6. Chris Pehura, ‘10 Algorithm Categories for AI, Big Data, and Data Science’, https://www.kdnuggets.com/2016/07/10-algorithm-categories-data-science.html (2016), accessed 9 April 2019
  7. Colin Fernandez, ‘Robot kills factory worker after picking him up and crushing him against a metal plate at Volkswagen plant in Germany’, https://www.dailymail.co.uk/news/article-3146547/Robot-kills-man-Volkswagen-plant-Germany.html (2015), accessed 9 April 2019
  8. Gabriel Hallevy, ‘The Criminal Liability of Artificial Intelligence Entities – From Science Fiction to Legal Social Control’, 4 Akron Intellectual Property Journal 171 (2010)
  9. Gabriel Hallevy, ‘Dangerous Robots – Artificial Intelligence vs. Human Intelligence’, https://ssrn.com/abstract=3121905 or http://dx.doi.org/10.2139/ssrn.3121905 (2018), accessed 9 April 2019
  10. Mireille Hildebrandt, ‘Ambient Intelligence, Criminal Liability and Democracy’, 2 Criminal Law & Philosophy 163, 164-170 (2008)

The writer is a Ph.D. candidate, CEPA, CFIP, ATA MIPA, ChMC, AMCFE, and Researcher

Contact: 0246390969     – Email: [email protected]
