Generative Artificial Intelligence and the law – the beginning or the end?


The rapid evolution of artificial intelligence (AI) has instigated transformative shifts across diverse global sectors, encompassing healthcare, finance, education, and governance. AI technologies have progressively assumed a pivotal role in propelling the world’s developmental trajectory.

Contrary to popular belief, AI technologies have a longstanding presence. Since the 1980s, manufacturing plants, particularly those dedicated to automobile production, have incorporated AI technologies. These sophisticated systems operate autonomously: they can assemble the components of an automobile, conduct reconnaissance for mining and mineral operations, or vacuum your apartment, all without direct human involvement.

In modern times, AI technology has begun to move beyond purely industrial uses. People are realizing the vast potential of fully self-operating machines, generative AI is being deployed across many fields, and the legal realm has not been left behind.

Several weeks ago, Forbes reported that the Bill & Melinda Gates Foundation had disclosed a substantial investment of US$30 million in a novel AI platform designed for deployment in Africa. The foundation asserts that the platform will assist scientists in developing solutions to healthcare problems across the continent.

Bill Gates, in an interview with the tech YouTuber Arun Maini, highlighted the substantial benefits this AI platform is anticipated to bring. Given the shortage of doctors in rural Africa, the platform aims to swiftly diagnose users' health conditions and recommend appropriate medication within a matter of minutes, all without human intervention.

In the realm of torts and civil wrongs, a pivotal inquiry emerges: should the Artificial Intelligence (AI) platform be held to the identical standard as a conventional medical practitioner? In scenarios where users suffer harm due to misdiagnosis, erroneous prescription of medication, or potential dosage irregularities, the allocation of responsibility becomes a pressing concern.

The landmark legal precedent, Donoghue v Stevenson, establishes that if a product is found to be defective in a manner reasonably foreseeable by the manufacturer, said manufacturer may be held accountable not only for the defect itself but also for any resultant harm suffered by the consumer.

The application of this principle to AI introduces intricate considerations. The developers of a “medical AI” may lack medical education and qualifications, while concurrently, medical professionals and experts may lack the technical proficiency necessary for the development of beneficial medical AI systems.

The potential misalignment between the expertise of the creators of a medical AI and the critical healthcare domain in which the AI functions poses inherent challenges in attributing liability in instances of AI-induced harm. Consequently, this prompts a reevaluation of legal standards and responsibilities in the intersection of AI technology and healthcare, considering the unique characteristics and challenges posed by these advanced systems.

Another plausible application of Artificial Intelligence within the legal realm involves utilizing generative AI for seeking legal advice. Given the prevalent accessibility of AI today, an average citizen facing a pressing legal inquiry might readily input their questions into an AI for swift feedback—particularly considering its cost-free nature in contrast to an actual lawyer who may charge for similar services. There is no inherent obstacle to employing Artificial Intelligence for obtaining straightforward legal advice, particularly given the abundant availability of statutes from Ghana’s parliament on their official website, and multiple online depositories of Ghanaian judicial decisions.

These resources are readily accessible to AI, enabling it to retrieve and interpret pertinent information to effectively address user queries.

One must thus contemplate the idea that while humans have traditionally navigated the application of the law in the pursuit of justice, there is a question of whether AI could potentially excel in this domain. A cursory Google search for “AI LAWYER” yields numerous websites offering legal analysis through generative AI, often at a lower cost than hiring a human lawyer, and delivering the service almost instantly.

For example, LegalRobot is an AI-driven platform that helps users understand and draft legal documents, such as contracts, with ease. DoNotPay is an AI-powered chatbot that simplifies the handling of various legal issues, including consumer rights, parking tickets, and small claims disputes.

These AI systems operate on binary switches and, in some instances, may demonstrate proficiency in applying the law, potentially outperforming humans.

Consider a legal scenario involving the application of section 1(1) of the Sale of Goods Act, Act 137, which defines a contract of sale of goods. The provision states, “a contract of sale of goods is a contract whereby the seller agrees to transfer the property in goods to the buyer for a consideration called the price, consisting wholly or partly of money.”

When AI engages with this provision, it systematically poses questions:

  1. Is there an enforceable agreement, a contract? If not, there is no sale of goods contract.
  2. Did the seller agree to transfer property in the goods to the buyer? If not, there is no sale of goods contract.
  3. Did the seller agree to transfer the property in the goods to the buyer for consideration? If not, there is no sale of goods contract.
  4. Was the consideration consisting wholly or partly of money? If not, there is no sale of goods contract.

The AI is likely to yield a positive answer only if all these questions receive affirmative responses. Is the mechanical application of the law in this manner particularly detrimental? Not entirely, because some schools of thought hold that this literal application is to be preferred.
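The mechanical, all-or-nothing reasoning described above can be sketched as a simple conjunction of conditions. This is an illustrative sketch only; the function name and parameters are hypothetical and not drawn from any actual AI system:

```python
def is_sale_of_goods_contract(
    enforceable_agreement: bool,   # 1. Is there an enforceable agreement, a contract?
    transfer_of_property: bool,    # 2. Did the seller agree to transfer property in the goods?
    for_consideration: bool,       # 3. Was the transfer for a consideration (the price)?
    money_consideration: bool,     # 4. Did the consideration consist wholly or partly of money?
) -> bool:
    """Mechanically apply s. 1(1) of the Sale of Goods Act, Act 137:
    every element must be satisfied, or there is no sale of goods contract."""
    return all([
        enforceable_agreement,
        transfer_of_property,
        for_consideration,
        money_consideration,
    ])
```

On this logic, a barter arrangement, where the consideration contains no money element, would fail the fourth question and fall outside the provision, however similar it might otherwise be to a sale; that rigidity is precisely what the literal approach entails.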

In fact, there is a strong belief that AI may excel in certain legal areas, such as Tax Laws, characterized by technical complexity and strict literal interpretation. However, the question arises as to how well AI would fare in analyzing constitutional law issues, where interpretations are seldom literal and often employ modern purposive approaches to reach conclusions.

By use of these mechanical binary switches, AI is likely to be faultless in the literal application of the law. Where liability for an offence is strict, AI can plausibly be expected to apply the law better than humans; and yet anyone who understands the discipline of the law understands that it seldom lends itself to a strict and literal application.

However, how would AI, with its unempathetic and strictly mechanical approach, understand the profound words of Sowah JSC (as he then was) in the renowned decision of Tuffour v Attorney General [1980] GLR 637? How could AI comprehend and implement the diverse rules of interpretation, discerning when to apply the mischief rule instead of the literal rule? This may arguably be a capability uniquely inherent to humans.

In the realm of AI’s application in the legal field, a critical consideration emerges concerning the ethical standards to which AI should be held, especially in comparison to practicing lawyers. While AI functions as information systems, the question of ethical accountability looms large.

Unlike human practitioners, who adhere to established ethical codes and professional conduct, AI lacks the inherent capacity for ethical discernment. How will generative AI provide legal answers in cognizance of the General Codes of Conduct of the General Legal Council of Ghana? How will a medical AI, produced outside Ghana, provide medical advice in cognizance of the Ghana Health Service Code of Conduct and Disciplinary Procedures?

This basic analysis necessitates a nuanced exploration of legal and ethical responsibilities in the utilization of AI. Should AI be regarded as a mere tool, or does its role in processing sensitive legal information demand a distinct set of ethical guidelines and liabilities? Striking a balance between the undeniable advantages of AI in legal applications and the imperative to safeguard ethical and legal standards becomes a paramount consideration in the ongoing integration of AI into legal and medical practice.

While AI may not inherently be deemed as “bad lawyers,” the pivotal question revolves around the extent to which they should be entrusted with the reins of legal practice. In China, robot judges decide on small claim cases, while in some Malaysian courts, AI has been used to recommend sentences for offences such as drug possession.

The prospect of ceding control entirely to AI prompts reflection on the essence of human intervention in the utilization of these advanced technologies. While AI offers unparalleled efficiency and data-processing capabilities, it lacks the nuanced understanding, ethical discernment, and contextual comprehension inherent in human practitioners.

Human intervention in the implementation of legal and medical AI may be crucial to temper its potential shortcomings, ensuring a harmonious integration of technological advancements with the essential qualities of empathy, ethical judgment, and legal expertise intrinsic to human lawyers. Striking the right balance between AI’s capabilities and human oversight becomes imperative to harness the full potential of these technologies without compromising the fundamental principles and values that underpin the legal profession.

In the interim, until we succeed in developing AI systems that genuinely embody human qualities, it is imperative that AI not assume a leading role in the legal or medical domain. Adopting a judicious approach, the collaborative deployment of AI alongside human expertise offers a nuanced strategy. Human lawyers and doctors can leverage AI for accelerated information retrieval and employ generative AI for legal research and analysis.

However, it is crucial to acknowledge that AI should serve as a starting point for both lawyers and users alike. AI’s interpretation of legal and medical matters cannot be considered definitive; rather, it should be regarded as a recommendation for further consideration. This underscores the importance of human intervention in refining and contextualizing the insights provided by AI.

It may therefore be considered, that if AI, in essence, remains reliant on human input and judgment, questions arise as to the necessity of its application at all. Balancing the strengths of AI with the indispensable qualities of human judgment becomes paramount in optimizing the collaborative potential of these technologies in the legal and medical realm.

The writer is a development researcher from the University for Development Studies and a human rights activist with a knack for innovative problem-solving. Currently pursuing law at GIMPA, he passionately advocates for the future. His work, blending law and development, focuses on the transformative potential of social innovations for driving meaningful change. He can be reached via 0504816655 or [email protected]