Artificial Intelligence’s promise and peril

Generative AI is poised to unleash a wave of creativity and productivity but poses important questions for humanity

Picture a world where machines are artists, storytellers, or even economists, producing content that imitates human intelligence. Alan Turing, the pioneering computer scientist, first envisioned the possibility of machines reaching such levels of mastery in a 1950 paper. With ChatGPT and other so-called generative artificial intelligence tools, the “imitation game” he proposed is now reality. It feels as if we’ve been catapulted into a universe once reserved for science fiction. But what exactly is generative AI?

GenAI represents the most impressive advance in machine-learning technologies yet. It marks a significant leap in AI’s ability to understand and interact with complex data patterns and is poised to unleash a new wave of creativity and productivity. But it also raises important questions for humanity. Key innovation milestones marked the path to its current sophistication.

In the 1960s, a program called ELIZA impressed scientists with its ability to generate human-like responses. It was basic and operated by set rules, but it was the precursor of what we now know as “chatbots.” Two decades later, artificial neural networks re-emerged. These networks, inspired by the human brain, gave machines new skills, such as understanding the nuances of language and recognizing images. But a limited pool of training data and inadequate computing power held back real progress. Both resources kept growing exponentially, however, setting the stage for the third wave of AI in the 2000s: deep learning.

Deep learning

With innovations such as Google Translate, digital assistants like Alexa and Siri, and the emergence of self-driving cars, machines started to understand and interact with the world. Yet for all this progress, a piece of the puzzle was still missing. Machines could assist and predict, but they couldn’t truly understand the intricacies of human conversation, and they were poor at generating human-like content.

Then, in 2014, generative adversarial networks (GANs) leveraged the ability of two competing neural networks to sharpen each other’s skills continuously. The “generator” created imitation data, text, or images, while the “discriminator” tried to differentiate between real and simulated content. This dual-network competition revolutionized the way AI understood and replicated complex patterns.
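
To make the generator-and-discriminator duel concrete, here is a minimal, purely illustrative sketch in Python with PyTorch. The toy task, network sizes, and training settings are invented for this example and bear no relation to the systems described above: a tiny generator learns to mimic samples from a simple bell curve while a discriminator learns to tell real samples from generated ones.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a candidate sample.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = 3.0 + 0.5 * torch.randn(64, 1)   # "real" data: numbers clustered near 3
    fake = generator(torch.randn(64, 8))    # generated imitations

    # Train the discriminator: label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should drift toward the real data's range.
print(generator(torch.randn(5, 8)).detach().squeeze())

Real GANs operate on far richer data, such as images, but the back-and-forth training shown here is the core of the idea.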

The last piece of the puzzle arrived in 2017 with a groundbreaking paper, “Attention Is All You Need.” The transformer architecture it introduced taught models to focus on the most relevant parts of their input, and suddenly the machines seemed to get it, to grasp the essence of what they were reading. The resulting generative AI produced eerily human-like content, at least in labs.
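
For readers curious about the mechanism behind the buzzword, below is a toy version of the scaled dot-product attention described in that paper, written in Python with NumPy. The sequence length, embedding size, and random values are made up purely for illustration: each row of the resulting weight matrix shows how much one token “attends” to every other token.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Compare every query with every key to score relevance, scale, then softmax.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output row is a relevance-weighted mix of the value vectors.
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                       # 4 tokens, 8-dimensional embeddings (arbitrary)
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))

output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))                       # each row sums to 1: where each token "looks"

Stacking this operation many times, with learned projections for the queries, keys, and values, is the heart of the transformer models behind today’s chatbots.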

Together, GANs and attention mechanisms, supported by ever-growing information and computing power, set the stage for ChatGPT—the most astonishing chatbot ever. It was launched by OpenAI in November 2022, and other big-tech firms soon followed with GenAI chatbots of their own.

Economics and finance

AI is not, of course, a new concept in economics and finance. Traditional AI (advanced analytics, machine learning, predictive deep learning) has long been crunching numbers, gauging market trends, and customizing financial products. What sets GenAI apart is its ability to delve deeper and interpret complex data more creatively. By dissecting intricate relationships between economic indicators or financial variables, it produces not just forecasts but alternative scenarios, insightful charts, and even snippets of code that could significantly change how the sector operates.

The evolution from traditional to generative AI has introduced a new era of possibilities into both public and private spheres. Governments are beginning to employ these smarter tools to improve citizen services and overcome workforce shortages. Central banks are taking note, seeing in GenAI an enhanced capacity for sifting through vast amounts of banking data to refine economic forecasts and better monitor risks, including fraud.

Investment firms are turning to GenAI to detect subtle shifts in stock prices and market sentiment, drawing from a larger body of knowledge to propose more creative options, paving the way for potentially more lucrative investment strategies. Meanwhile, insurance companies are exploring how generative models can create personalized policies that align more closely with individual needs and preferences.

GenAI is evolving at a breakneck pace, pushing the boundaries of AI capabilities in economics and finance and introducing novel solutions to old challenges. Some people are skeptical. They say that, like a stochastic parrot, AI can produce fluent but nonsensical or false statements, a phenomenon called “hallucination,” without really understanding the meaning behind the words. ChatGPT’s knowledge, they point out, extends only to its training cutoff date. Possibly. But given the mind-boggling pace of innovation, how long will these arguments remain relevant?

Still, the initial excitement surrounding GenAI has given way to growing and genuine concerns. Traditional challenges associated with AI, such as the amplification of existing biases in training data, or the lack of decision transparency, have taken on renewed urgency. New concerns have also arisen.

AI weaponized

One particularly alarming risk is GenAI’s remarkable ability to tell stories that resonate with individuals’ preexisting beliefs and viewpoints, potentially reinforcing echo chambers and ideological silos. Malicious actors can leverage this ability not only through the written word: in March 2022, an AI-generated video purported to show Ukrainian President Volodymyr Zelenskyy surrendering to Russian forces. Such incidents demonstrate how GenAI can be weaponized to manipulate politics, markets, and public opinion.

Whether it’s a fabricated story, doctored image, or synthetic video, GenAI creations can be so convincing that they create a false sense of reality. This has the potential to spread misinformation, incite panic, and even destabilize economic or financial systems with unprecedented efficiency and intensity. It may not always be deliberate: machines may spread misinformation unintentionally as a result of hallucinations.

The threat of AI is not limited to manipulation. Job displacement is another concern: as GenAI continues to advance, it could automate tasks previously performed by humans, leading to significant job losses and requiring new strategies for employment and retraining.

Earlier this year, leading AI experts, including ChatGPT’s creator, cosigned a letter warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” They were echoing concerns expressed decades earlier by Turing, who warned that “there is a danger that machines will eventually take control of our lives.”

We stand at a crossroads of technology and ethics. GenAI, with its vast promise and profound, existential questions, cannot be uninvented. As we leverage its transformative power, it’s imperative to remember Turing’s enduring counsel. GenAI is a monumental shift that demands vigilant oversight, new regulatory frameworks, and an unwavering commitment to ethical, transparent, controllable innovations that harmonize with human values.

HERVÉ is head of the IMF’s Digital Advisory Unit

Credit: Finance & Development
