Bridging the digital safety divide


Addressing inequities and safeguarding rights for online platform users in Africa

By Abigail ADU-DAAKO

In today’s interconnected world, access to digital platforms and products has become essential, enabling individuals to communicate, access information, and connect with others. As internet penetration increases in Africa and the global majority world, online platforms offer numerous opportunities for community building, networking, research, entertainment, as well as business and career growth – as seen in the proliferation of content creators and online businesses.

Thus, having an online presence is becoming increasingly necessary, especially in the developing world, where over 60 percent of social media users reside. Social media usage has grown significantly in emerging markets in recent years, particularly in Africa – Kenya and Nigeria are currently among the top 10 countries whose users spend the most time on social media, at approximately four hours a day.

Unintended safety risks of online platforms

While online platform presence and use bring many benefits, they also come with several safety risks. As platforms expand and user numbers grow, concerns about digital rights and safety equity have emerged, revealing disparities in user safety and rights protection – particularly for users in the global majority world.

Online communities in these regions often face disproportionate harm, including harassment, hate speech, and misinformation. In Ghana and other African countries, cyberbullying, political disinformation, harassment, and sextortion are among the major risks social media users encounter online.

These risks can have direct or unintended effects on a user’s well-being – mental, emotional, psychological, or even physical. For example, doxing – the public release of personal information such as a home address or full name – can lead to real-world threats and harm.

Targeted harassment and cyberbullying, including derogatory comments, can cause anxiety and isolation, greatly impacting users’ mental and emotional well-being. As the journalist and Nobel laureate Maria Ressa once said: “Online violence is real-world violence.” Thus, ensuring digital rights and safety equity is not only a matter of social justice but also a crucial step toward fostering an inclusive and equitable digital society.

Platform deprioritization of users in the global majority

Online platforms have historically deprioritized users in the developing world. This manifests in different ways – including inconsistent content moderation and policy enforcement, a lack of nuance and local context in platform policies, and insufficient resources (e.g., limited language support). The result is a misreading of local and cultural contexts in content, which allows harmful content to spread without prompt intervention.

For example, Facebook (now Meta) has been accused by Amnesty International of failing to adequately moderate hate speech and harmful content during the Tigray conflict in Ethiopia, which supercharged the spread of harmful rhetoric. This has been attributed to a shortage of content moderators who understand local languages and contexts, allowing harmful content to proliferate unchecked. Political disinformation is another area where gaps appear in platforms’ policy enforcement and content moderation in the region.

During Kenya’s 2022 election, social media platforms were criticized for failing to curb the spread of misinformation; their response was seen as slow and ineffective. The same can be said of Nigeria’s 2023 elections, where websites spreading election-related disinformation on social media repeatedly shaped the dominant narrative. These examples illustrate how harmful content that goes unchecked undermines communities’ safety and their trust in these platforms.

Amplified impact on African communities

These harms disproportionately affect users in African communities and other underrepresented groups because the existing digital divide and socio-economic inequities intensify their impact, creating digital safety inequities. There are several reasons these risks are amplified for users in the global majority.

Firstly, there is low digital literacy and limited awareness of the existing tools and resources for protection against online abuse. Many online platforms and products offer tools such as blocking, muting, restricting who can view or interact with an account, and reporting abuse, alongside the community guidelines and policies that govern the platform.

However, information about these tools and how to use them is not always available or easy to understand, especially when it is published only in Western languages. This leaves users in these regions vulnerable and can lead to under-reporting of abuse. Systemic economic and social disparities also make it harder for many users in African communities to access resources to protect themselves against online abuse, leaving them more vulnerable and easier targets for harassment.

Access to mental health resources to manage the impact of these harms and online abuse is also limited, exacerbating the damage. In addition, biased content-moderation algorithms can inadvertently overlook harmful content or remove legitimate content because they lack context or nuance, leading to inconsistent application of policies.

Finally, legal protections are often inadequate or weakly enforced, leaving users with minimal recourse when their digital rights are violated. These disparities in the protection and security afforded to users in the global majority widen the digital safety divide and put platform users in these regions at further risk.

Platform accountability and solutions

There is a lot that online platforms can do to bridge the digital safety equity gap. Content moderation is one area where change is needed. Platforms should invest more resources in hiring and training human moderators who speak local languages and understand the context, nuances, and subtexts needed to effectively detect and flag harmful content. Bad actors often use humor, satire, and sarcasm to disguise disinformation and harmful content, and catching these tactics requires moderators who understand the nuances. Similarly, moderation algorithms should be trained on diverse datasets and languages to capture these nuances, with a human always kept in the loop.

Additionally, platforms need to create policies that capture local contexts for effective policy enforcement. They can go beyond high-level global rules and tailor policies to local contexts. For example, the definition of sexually inappropriate content could be expanded or narrowed depending on the culture, religion, and norms of a specific society.

Definitions and categorizations of harm are usually based on Western constructs, which may not apply in other local contexts. Policies are also often translated only into Western languages, making them inaccessible to users in other countries. All of this leads to inconsistencies in how policies are enforced. Finally, platforms should increase education and awareness of their policies and strengthen relationships with local and regional experts who can provide valuable insights into the unique challenges faced by users in Africa and the rest of the majority world.

Knowing your digital safety rights and protections

If you frequently use these platforms and products, it is important to know how to protect yourself from abuse. Firstly, be aware of your digital rights, as well as existing policies and regulations for digital safety, and how to exercise them. In Ghana, relevant laws such as the Data Protection Act, the Cybersecurity Act, and the Electronic Communications Act contain provisions that govern online content and behavior.

Also, familiarize yourself with the community guidelines and policies of the online platforms you use, as well as how to report harmful content. Knowing each platform’s appeal and feedback processes is also useful for contesting decisions and informing policy updates. Several civil society organizations, such as the Ghana Internet Safety Foundation and the Media Foundation for West Africa, provide resources and support to victims of online abuse.

Finally, advocate for stronger laws and policies that protect user rights and safety online and hold platforms accountable. While some regulations and policies exist, their enforcement and effectiveness are still evolving. Thus, it is important to engage with policymakers, civil society organizations, and platforms to ensure that the voices and perspectives of African communities are adequately represented in decision-making on digital safety and rights.

Bridging the digital safety divide is not only a matter of equity but also of upholding fundamental human rights. Although this work requires resources, social media platforms have a responsibility to ensure that all users, regardless of location, can use their platforms safely and securely. Addressing the safety inequities faced by users in the global majority world is necessary for fostering a more inclusive and equitable digital world.

The writer is a Tech Policy professional with a Master of Public Policy degree from the University of California, Berkeley. She currently leads Trust and Safety policy work at the Wikimedia Foundation, where she creates and updates platform policies for Wikipedia and over 12 related products and platforms. She also advises product and engineering teams on safety risks for new features and products and works collaboratively on scalable solutions. Abigail also has a background in responsible AI and economic research and has worked for international organizations such as the World Bank.
