ICT Insight with Institute of ICT Professionals: Misinformation and disinformation in the digital age: A threat to democracy


By Abraham Fiifi SELBY

In the digital age, the rapid dissemination of information has transformed the way we communicate, access news, and engage with the world. However, this unprecedented flow of information has also given rise to a dangerous phenomenon: the spread of misinformation and disinformation.

These twin threats have become significant challenges to democratic societies, undermining trust in institutions, distorting public discourse, and influencing political outcomes. As we navigate this complex landscape, the need for regulation, fact-checking, and media literacy has never been more urgent.

The rise of social media has amplified the reach and impact of misinformation and disinformation. Platforms like Facebook, WhatsApp, X (formerly Twitter), and YouTube have become breeding grounds for false narratives, conspiracy theories, and propaganda.

The algorithms that drive these platforms prioritize engagement, often promoting sensational or polarizing content over factual information. This creates an environment where falsehoods can thrive, and the truth can be drowned out.

Understanding Misinformation and Disinformation

Misinformation refers to false or inaccurate information that is shared without malicious intent. It often spreads rapidly due to the ease of sharing content on social media platforms.

Disinformation, on the other hand, is deliberately created and disseminated to deceive, manipulate, or cause harm. Both forms of false information can have devastating consequences, particularly in the political arena.

The Threat to Democracy

Democracy relies on an informed citizenry capable of making rational decisions based on accurate information. Misinformation and disinformation undermine this foundation by spreading falsehoods, sowing confusion, and eroding trust in democratic institutions. When citizens are unable to distinguish between fact and fiction, the very fabric of democracy is at risk.

African elections face similar challenges, with private mercenary groups like “Team Jorge” conducting disinformation operations in over 20 countries since 2015 (ACSS, 2024). Key patterns have emerged: three times more disinformation campaigns in nations without presidential term limits; a 78% increase in journalist harassment under the guise of “anti-fake news” laws; and coordinated inauthentic behavior targeting youth voters through meme warfare, as stated in the Africa Center for Strategic Studies’ 2024 report.

The Role of Ghost and Fake Accounts in Spreading Misinformation and Disinformation

One of the most insidious tools used by those who spread misinformation and disinformation is the ghost or fake account. These accounts, often created with false identities, are designed to manipulate public opinion, amplify divisive narratives, and even harass individuals. What makes them particularly dangerous is their ability to operate under the radar, masquerading as real users while spreading falsehoods or exploiting personal data.

One of the challenges is that these platforms rely heavily on algorithms to detect suspicious activity. However, malicious actors are constantly evolving their tactics to evade detection. For example, they may use AI-generated images or deepfake technology to create more convincing fake profiles. They may also employ tactics like “astroturfing,” where they create the illusion of widespread grassroots support for a particular cause or candidate.

The Need for Regulation and Fact-Checking

While fact-checking organizations have expanded their capacity by 300% since 2020 (Kyriakidou et al., 2022), limitations persist. Correction acceptance rates drop to 22% when a correction contradicts partisan beliefs, and 68% of users never see fact-checks of content they originally consumed. Generative AI also enables “hydra” disinformation: debunk one claim and three more emerge, as Darrell M. West notes in a 2024 Brookings publication.

Addressing the threat of misinformation and disinformation requires a multi-faceted approach. Regulation, fact-checking, and media literacy are essential components of any strategy to combat false information.

  1. Regulation: Governments and technology companies must work together to establish clear guidelines and regulations to curb the spread of false information. This includes holding social media platforms accountable for the content they host and ensuring transparency in their algorithms. Regulations should also address the use of bots and fake accounts, which are often used to amplify disinformation campaigns. However, regulation must be carefully balanced to avoid infringing on freedom of speech. Striking this balance is a complex challenge, but it is essential to protect democratic values while combating false information.
  2. Fact-Checking: Independent fact-checking organizations play a crucial role in debunking false information and providing the public with accurate and reliable information. Collaborations between fact-checkers and social media platforms can help flag false content and reduce its spread. Additionally, promoting fact-checking initiatives and making them more accessible to the public can empower individuals to critically evaluate the information they encounter.
  3. Media Literacy: Educating the public about media literacy is a long-term solution to the problem of misinformation and disinformation. By teaching individuals how to identify credible sources, recognize bias, and verify information, we can build a more resilient society capable of resisting false narratives. Media literacy should be integrated into school curricula and public awareness campaigns to ensure widespread adoption.

As former EU Commissioner Věra Jourová noted: “Democracy cannot be a spectator sport in the digital age. Protecting truth requires both technological vigilance and renewed civic commitment.” (Kyriakidou et al., 2022). The path forward demands not just better policies, but a fundamental reimagining of our relationship with information in the political sphere. The figure below shows pathways for tackling misinformation and disinformation.

Figure 1.0 Pathways to tackle misinformation and disinformation

The Role of Technology, Innovation and Pathways Forward

Technology has been both a driver of the misinformation problem and a potential solution. Artificial intelligence (AI) and machine learning can be leveraged to detect and flag false information in real-time. For example, AI algorithms can analyze patterns of disinformation campaigns and identify suspicious accounts or content. However, these technologies must be used responsibly and transparently to avoid unintended consequences.
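To make the idea of pattern-based detection concrete, here is a minimal sketch of the kind of heuristic such systems build on. All feature names and thresholds below are hypothetical, chosen only to illustrate how bot-like behavioral patterns (very new accounts, inhuman posting volume, mass-following) can be scored; real platforms use far more sophisticated machine-learning models.

```python
# Toy illustration (not a production system): a rule-based score for
# flagging potentially inauthentic accounts. Features and thresholds
# are hypothetical examples of bot-like behavioral patterns.

def suspicion_score(account: dict) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if account["age_days"] < 30:            # very recently created
        score += 0.3
    if account["posts_per_day"] > 100:      # inhumanly high posting volume
        score += 0.4
    if account["followers"] < 10 and account["following"] > 1000:
        score += 0.3                        # mass-following pattern
    return min(score, 1.0)

accounts = [
    {"age_days": 5, "posts_per_day": 250, "followers": 2, "following": 4000},
    {"age_days": 900, "posts_per_day": 3, "followers": 150, "following": 200},
]
flagged = [a for a in accounts if suspicion_score(a) >= 0.5]
print(len(flagged))  # only the first, bot-like profile is flagged
```

Even this crude sketch shows why transparency matters: every threshold embeds a judgment about what counts as “suspicious,” which is exactly why the article argues such tools must be used responsibly and openly.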

Effective solutions require multi-stakeholder collaboration:

The table below outlines key strategies and their corresponding implementation approaches, each requiring joint effort across government, industry, and civil society:

Strategy                  | Implementation
--------------------------|-----------------------------
Platform Accountability   | EU-style co-regulation
Media Literacy            | National education programs
Technological Solutions   | AI detection tools
Legal Frameworks          | Updated laws

Conclusion

Misinformation and disinformation pose a significant threat to democracy in the digital age. The rapid spread of false information undermines trust in institutions, distorts public discourse, and influences political outcomes.

To address these challenges, we must adopt a comprehensive approach that includes regulation, fact-checking, and media literacy. By working together, governments, technology companies, and civil society organizations can build a more informed and resilient society capable of safeguarding democratic values.

Key References:

  • West, D. M. (2024) How disinformation defined the 2024 election narrative. Brookings Institution.
  • Kyriakidou, M. et al. (2022) ‘Questioning Fact-Checking in the Fight Against Disinformation: An Audience Perspective’, Journalism Practice, 17(10), pp. 2123–2139. doi: 10.1080/17512786.2022.2097118.
  • Africa Center for Strategic Studies (2024) Mapping a Surge of Disinformation in Africa.

About the Author

Author: Abraham Fiifi Selby | PG, UCL, School of Public Policy, UK | Digital Analyst | Member, IIPGH | Internet Society (Ghana Chapter), ICANN, EGIGFA

This publication is written in the author’s own capacity and is not affiliated with any of the membership organizations.

Email: [email protected] or [email protected]