By Kwesi Papa OWUSU-ANKOMAH
The rise of artificial intelligence (“AI”) in recent years and its gradual incorporation into several facets of human endeavour open up opportunities for rapid technological advancement with its attendant benefits.
The widespread adoption of AI, with all its advantages, does not negate the fact that such systems occasionally fail, cause harm to innocent people, and present novel liability questions which the current legal landscape, given its limitations, is arguably not best placed to remedy.
Businesses using AI also face exposure to indeterminate risks and liability owing to the uncertainty of the law as it relates to AI.
This article is the first in a series which examines AI in the broader context of the existing legal framework, its challenges, the need for legislation and relevant proposals. This premier article focuses on the limitations of the existing liability regime, the risks posed to businesses and practical steps to mitigate such risks. The next in the series will address the need for legislation to bridge existing legal gaps and properly govern AI-related disputes, and will set out proposals for consideration.
Introduction
AI is the use of computer systems to perform complex tasks typically performed by humans using human intelligence. AI can be predictive (using historical datasets/information to predict future outcomes such as demand and trends) or generative (using large datasets/information to generate new content like text, media, and sound).
AI has simplified tasks, improved service delivery and expanded the limits of machine capability given its autonomous learning features such that it is fast becoming an essential tool wherever technology is deployed. However, there are significant legal risks faced by all actors in the AI supply chain.
Legal risk and the liability question
As with most fault-related situations, every party capable of being faulted generally tends to shy away from taking responsibility. The remarkably intricate nature of AI supply chains and their complex multi-layered networks makes it difficult for a wronged party to know the multiple actors in the supply chain, determine the role each actor played, identify the exact cause of failure, establish which actor is at fault, or apportion blame between actors in the AI supply chain.
The liability question is not only limited to identifying the actors in the AI supply chain but exploring the best cause of action available with the highest chance of success for any harm caused by an AI system/tool. Ordinarily, the sheer wealth of common law decisions and principles developed over the years should be sufficient to deal with any issue that may arise from the deployment and use of AI.
However, the constantly evolving and nuanced AI landscape unearths new perspectives which existing liability regimes have not considered. Indeed, complex issues arise in relatively settled areas of law such as contract and torts particularly when the use of AI is central to such disputes. In Ghana, the paucity or lack of decisions on how the courts have applied long-established legal principles to AI-related disputes does not bring any clarity.
Limitations of existing liability regimes
While traditional concepts of law may be employed in determining liability in AI-related disputes, they are not without limitations, particularly since the development of these concepts far predates the emergence of AI. The novel liability questions and the limitations of existing legal concepts applicable to AI are discussed below.
Contract: Traditionally, parties are allowed to contract freely unless any term of the contract is illegal, unconscionable or contrary to public policy. So, if a software developer carefully crafts a User Agreement (an agreement that typically regulates software use), a significant portion of liability for any AI failure could easily be limited. Where liability is limited, compensation to an end user may be minuscule and not commensurate with the loss suffered.
Likewise, where there is no direct contractual relationship between the end user and the third-party software developer/manufacturer or AI system provider (as is usually the case), there may be no remedy available to a wronged party because of the principle of privity of contract (which requires a direct contractual relationship between parties before a party can rely on the provisions of a contract). Statutory provisions which allow third parties to rely on the provisions of a contract where the contract appears to confer a benefit on the third party[1] may not afford much protection to a wronged party, particularly where the wronged party does not know the contents of such contracts.
Also, it is unknown whether the guaranteed statutory protections under the Sale of Goods Act[2] (“SGA”) apply to AI systems. Goods are defined under the SGA as “movable property of every description” while ‘property’ is defined as “general property in the goods and not merely special property”[3]. It could be argued that the definition of goods as movable property of every description is broad enough to encompass AI. If the courts follow this interpretation, implied terms as to quality and fitness for purpose will apply to AI systems. However, where AI has continuous learning functionalities and the ability to make independent decisions, can it be said that an AI system is not of good quality or fit for purpose when it produces irrational outcomes, considering that no party has control over the autonomous learning ability of such systems?
Interestingly, software supplied electronically is not considered goods under the English Sale of Goods Act. The English courts have interpreted goods under that Act to exclude intangible computer software (not supplied in any tangible form, for example an internet download), which is distinguishable from hardware.[4] Although there seems to be industry consensus that AI is software (even though it can be embedded in hardware), it is unclear whether the courts will adopt this approach.
It is also uncertain whether the code underpinning an AI tool, which is generally considered intangible but arguably tangible, can be categorised as ‘goods’. Ghanaian law tends to follow English law (unless there are statutory exceptions) and it remains to be seen how the courts will treat this question when determining a dispute, considering English and Australian decisions finding that software downloaded over the internet does not constitute a ‘good’ under their respective Sale of Goods Acts.[5]
Arguably, statutory protections like fitness for purpose and quality may well apply to predictive AI. However, their application to generative AI, where independent decisions are made based on data inputted into the AI system over time, is not as straightforward. This loophole in the law is a major limitation on using traditional principles of contract law to address the challenges posed by new technologies such as AI.
Tort: Typically, where a remedy in contract is not available, principles of tort law are applied. Under ordinary legal principles, a party suing in tort must demonstrate that (1) there is a duty of care, (2) a breach of that duty, (3) damage caused by the breach, and (4) that the damage suffered is not too remote. The law recognises some relationships as special relationships (such as manufacturer and ultimate consumer, medical professionals and patients) where a duty of care is imposed.
On its face, it may seem that a special relationship exists between a manufacturer of an AI system and the ultimate consumer/end user. However, it is important to point out that the traditional duty of care between manufacturer and consumer related to products that were not autonomous or capable of independent decision making.
In the context of AI, it may not be easy for a wronged party to establish a duty of care, particularly when the entity likely to be held responsible (say a developer/manufacturer) may not retain control over the AI product/system, especially its autonomous learning features. Even where a duty of care is successfully established, the true extent of the duty owed by a developer to the end user, whether that duty persists throughout the lifecycle of the AI system, and the applicable standard of care remain unclear.
Presently, the test for the standard of care applied in our courts is the human standard of care: what an average reasonable professional with similar experience would do given the same set of facts. There is no equivalent or comparable standard of care for AI developed by the courts, nor are there generally accepted industry standards for AI outcomes. In the absence of set standards or a general consensus, ascertaining whether reasonable measures were instituted to prevent harm in an AI product is a potentially exacting task.
A compelling argument can be made that the applicable standard of care is that of a reasonable developer or manufacturer creating an AI system or product. However, this standard does not take into account machine learning functionalities of AI which may not be foreseeable. Transposing the human standard of care to AI is problematic and not fit for purpose – the test for AI may well be higher than that of humans depending on an individual’s view on whether AI systems or solutions are smarter than humans. Owing to the evolving nature of AI technology, industry standards remain undeveloped and any objective tests applied by the courts may well not fit the nuanced and advanced AI space especially in Ghana where AI is in its infancy and unregulated. Internationally, AI is predominantly self-regulated and in countries where regulations exist, proper standards are not set.
Even after the duty and standard of care are established, identifying the negligent party is in the majority of cases potentially insuperable, owing to the multiple actors and intermediaries in the AI supply chain discussed earlier in this article. Where the party responsible is identified, recourse will be had to the customs of the AI industry to determine if reasonable measures were put in place before releasing the AI system to market. AI industry customs are still evolving, and this may hinder a wronged party from proving breach of a duty of care.
On scaling the duty, standard of care and breach hurdles, there are additional difficulties with causation (essentially proving that the breach of the duty of care directly caused damage). Showing a causal nexus between the developer/manufacturer’s actions and AI outcomes, long after the developer/manufacturer has released the AI system onto the market and ceased to have control over it (even where updates are regularly released), is almost impossible to accomplish.
Where there is a break in the causal chain or any damage suffered can be attributed to a number of parties including the end user, the end user may well be contributorily negligent. Likewise, all or multiple actors in the AI supply chain may also be responsible for damage occasioned by an AI product and there may be challenges with apportionment of liability. Similarly, the unpredictability of AI systems makes it onerous to prove that any loss suffered was foreseeable. The cumulative effect of these limitations makes tortious liability in AI extremely difficult to establish.
Product Liability: Ghanaian courts have not had the opportunity to clarify or pronounce on whether AI is properly considered a service or a product. The lack of certainty in this area creates an egregious gap in the law, especially when a party is seeking a remedy for harm caused by AI. Internationally, courts have taken the position that software is not a product for the purposes of product liability law.[6] However, there is a compelling argument that AI could be a service or a product depending on its function (for example where AI is so embedded in, and inextricable from, the hardware of a product). Where AI is considered a product, the existing product liability and consumer protection regimes could provide a remedy for defective AI-enhanced products, although in Ghana the consumer protection framework is not contained in a composite legislation.
Similar to tortious liability, proving that an AI product is defective is an uphill task. Firstly, there is no clear definition of what constitutes a defective AI product. Mere malfunction of an AI tool does not amount to a defect just as subpar performance of a tool/product cannot necessarily be ascribed to a defect. Relatedly, an unanticipated or abnormal AI outcome coupled with the fact that AI output is predominantly based on inputs made by the end user makes demonstrating a defect onerous. Furthermore, the lack of consensus on acceptable industry standards makes proving an AI defect even more daunting. It must be clarified that the mere existence of an industry standard does not take away the power of the courts to decide that a prevalent industry standard is unsatisfactory or generally lagging behind.
Secondly, in assessing defectiveness, the courts will need to take into account product safety standards/requirements for AI. Considering that safety standards in AI are virtually non-existent, and that the burden of proving that a product is defective rests on the party asserting that fact, the chances of success are relatively low. A party may well need an in-depth understanding of the underlying code on which an AI tool was built, which adds an extra layer of complexity. Demonstrating that an AI defect is a manufacturing defect, as opposed to one that arose as a result of autonomous machine learning functionality, is a very technical hurdle to surmount. Different AI-enhanced product analysis tests will also lead to different results as to whether a product is truly or objectively defective.
Although AI systems may qualify as products, the protection afforded to consumers under product liability, while it exists on paper, is practically unattainable as a result of the high threshold set by the law.
Given the limitations of existing traditional legal concepts in adequately providing remedies for AI-related harms, there is a need for legislation to fill the liability gap, which will be discussed in the next article in this series.
Mitigating risks
In spite of the uncertainty, businesses must put in place liability mitigation measures to safeguard their interests. The measures to be instituted depend on which side of the supply chain a business is involved in and the peculiar risks it faces.
Stating capabilities: Clearly outlining the capabilities of AI systems in user manuals, to avoid any ambiguity about their functions, and stating the dangers of misuse mitigate the risks of this novel technology. Challenges with generative AI and ways of mitigating such risks are discussed under ‘Limiting/Excluding liability’ below. For businesses that use AI in the provision of services, human oversight is a key way of mitigating the risks associated with AI technology.
Limiting/Excluding liability: Limiting liability for this innovative technology is a must. Essentially, the performance of every AI technology depends on the dataset which is used to train the AI from its development stage to deployment. At the development stage, it is almost impossible to project how AI technology will perform particularly with generative AI which learns, develops and improves over time.
For generative AI, liability should be limited for actions that the AI system autonomously learns and develops, or it should be made clear that the end user assumes the risk associated with such learning. Another approach would be to provide in the contract that the end user assumes the risk after a number of years, once the AI system can be considered to have learnt enough.
Alternatively, liability can be excluded for the machine learning features of AI, given that the accuracy of AI systems turns on the information fed into them, and erroneous information will almost certainly lead to erroneous results which cannot be controlled on the development, manufacturing or supply side. Liability arising from AI is in some cases potentially indeterminate, particularly privacy concerns resulting from errors by voice assistants, or personal injury/death arising from automation such as the cruise control or lane assist features of cars.

Allocation of risk: Pivotal in any AI risk mitigation exercise is allocating risk. The preoccupation of end users reviewing AI contracts will be ensuring that remedies are available where any injury is suffered. Where risks are not allocated in a manner reflective of each actor’s role in the AI process, end users typically go after the entity with deep pockets or against which they have the highest chance of success. Users of AI who in turn deploy the AI system for the benefit of third parties will have to consider the allocation of risk to ensure that they are not unnecessarily exposed to liability for defects and other latent issues not within their control.
Insurance: Business risk insurance policies are good tools to protect businesses using AI, although the cost of taking out such policies may well be prohibitive. However, careful allocation of risk coupled with insurance policies is a good starting point. Businesses owe it to themselves to assess risks and to mitigate them through negotiation, weighing the business needs of the AI solution against the associated risks to the business.
Operational considerations: Ideally, all AI contracts should have continuous improvement clauses under which the performance of the AI is monitored to ensure that it is performing at its optimum and updates/improvements are implemented where necessary to augment its learning abilities. This is primarily to ensure that AI systems keep up with cutting-edge technology. On the supplier side, attention must be paid to the post-contractual phase to ensure that end-user action does not lead to a data breach (of the customer’s own data and that of third parties) which would expose the supplier to liability.
Path ahead
At first glance, it may seem that the risks associated with AI may be too complex for a business primarily concerned with making profits. However, the benefits of AI when deployed in a calculated manner far outweigh its risks. Ultimately, deployment of AI is a business decision which should be taken considering the needs of every business with guidance from the right professionals.
This area of law remains in its infancy and businesses should consult experts to assist in navigating the unique opportunities and challenges posed by AI systems, products and services.
The author welcomes discussions on this topic and can be contacted via email at [email protected].
[1] Contracts Act, 1960 (Act 25), s. 5(1).
[2] Sale of Goods Act, 1962 (Act 137), s. 13.
[3] Ibid., s. 81 defines goods as “movable property of every description, and includes growing crops or plants and other things attached to or forming part of the land which are agreed to be severed before sale or under the contract of sale”.
[4] See St. Albans City and District Council v International Computers Ltd [1996] 4 All ER 481, which was followed and applied in Your Response Ltd v Datateam Business Media Ltd [2014] EWCA Civ 281, and Computer Associates UK Ltd v The Software Incubator Ltd [2018] EWCA Civ 518.
[5] Ibid.; see also Gammasonics Institute for Medical Research Pty Ltd v Comrad Medical Systems Pty Ltd [2010] NSWSC 267.
[6] Uniform Commercial Code, s. 9-102, Definitions and Index of Definitions, item 42 “general intangible” (software is generally considered intangible); American Law Institute, 2010, s. 19, where products are defined as “tangible personal property”; America Online, Inc. v. St. Paul Mercury Insurance Co., 207 F. Supp. 2d 459 (E.D. Va. 2002); see also n. 5 above.