By Elijah ESHUN
To borrow a question from a famous scholar: What is ‘artificial’ about intelligence? Since its inception, artificial intelligence (AI) has raised critical questions about whether it will complement human activity or eventually replace it.
While this debate continues, one thing is certain: AI is here to stay. Whether it complements human efforts or automates entire sectors, its impact will be profound.
The danger is not only that individuals may lose jobs or privacy but that entire populations may face life-threatening consequences if left unprepared.
Among the most at risk are refugees, asylum seekers, and displaced persons – people already on society’s margins. As a friend and analyst once remarked, “The stakes are low when a Netflix algorithm recommends an inappropriate movie to a child. But they are dangerously high when an algorithm incorrectly denies asylum to someone fleeing war.”
AI, when poorly designed or improperly applied, can reinforce biases, deny rights without explanation, or make decisions that are difficult to appeal. Without safeguards, we risk automating exclusion and entrenching injustice.
This bare truth highlights a critical oversight in the global AI conversation: the disproportionate risks faced by those with the least access to digital tools and knowledge.
Consider this: if the average person fears losing a job because of AI illiteracy, then for stateless or displaced individuals, AI ignorance could cost them their lives.
In today’s humanitarian operations, AI tools are increasingly used to screen asylum claims, translate documents, predict displacement patterns, verify identities, and allocate aid. While these systems can improve efficiency, they also pose profound risks.
Displaced individuals often lack the digital literacy, language fluency, or legal support needed to understand how such tools impact their rights and futures. AI illiteracy creates a silent and lethal barrier.
It could easily become the next layer of global inequality. Without the ability to understand or challenge AI decisions, vulnerable individuals risk exclusion, exploitation, and further marginalization.
Already, digital illiteracy has made many refugees vulnerable to fraud. The United Nations High Commissioner for Refugees (UNHCR) has documented instances where displaced people were tricked into paying for fake resettlement schemes or misled by false information about aid programmes. These are only the first-level threats.
But as AI systems become more embedded in refugee processing and decision-making, these threats will escalate if nothing is done.
To address this, governments, international organizations, and civil society must act decisively. Here are four key policy recommendations:
Policy recommendations
- Launch AI literacy and digital education programmes for refugees: Humanitarian agencies must integrate digital literacy and AI awareness training into refugee assistance programmes. This will help displaced persons understand how algorithms may affect their rights and access to services.
- Establish independent ethical oversight bodies: Governments and international organizations should create independent committees to monitor and audit AI systems used in asylum and humanitarian contexts. These bodies must ensure fairness, accountability, and appeal mechanisms for affected individuals.
- Design human-centered and transparent AI systems: Developers and agencies must prioritize ethical design principles. This includes using explainable AI models, providing human oversight, and ensuring decisions are communicated in accessible ways that reflect cultural and contextual nuances.
- Strengthen legal frameworks against AI-enabled fraud and exploitation: Enact and enforce laws to protect displaced populations from digital scams, unauthorized data collection, and AI misuse. These should include penalties for fraudulent schemes targeting refugees and regulations on the responsible use of biometric and predictive technologies.
Conclusion
In a world racing toward AI integration, it is morally irresponsible to leave behind those already on the margins.
Refugees and displaced persons must not become digital collateral damage. Governments and humanitarian organizations must act now to bridge the AI literacy gap, implement ethical safeguards, and prevent the next generation of digital inequality.
If AI is to be a force for good, then its promise must be inclusive, humane, and just. The cost of inaction is not just technological failure; it is human tragedy.
The writer is a digital and AI policy analyst and researcher.