Combating AI-Generated Deepfakes and Disinformation: Strategies to Restore Trust in Public Institutions
AI-enabled deepfakes and narrative attacks fueled by misinformation and disinformation can erode public trust in institutions, fragment society, and empower authoritarian regimes.
Last year, I delivered the keynote address about the dangers of AI-enabled deepfakes and disinformation at the European Broadcasting Union’s annual cybersecurity seminar. This blog post for my employer, Blackbird.AI, is an updated text version of that speech.
Policymakers, leaders of NGOs and nonprofits, CISOs, and media professionals, in particular, must confront the perils of AI-enabled narrative attacks driven by misinformation and disinformation. Left unchecked in the hands of agenda-driven bad actors, these new AI systems can create schisms in our shared reality and shake the pillars of liberal democracy.
The World Economic Forum now ranks AI-enabled misinformation and disinformation as the top short-term global threat. Narrative attacks can create parallel realities and fracture societies by exploiting human biases, sowing confusion, and eroding trust in shared sources of truth. When false narratives spread unchecked, they can take root in echo chambers where they are reinforced and amplified, leading different segments of the population to believe contradictory versions of reality.

This splintering of the information landscape undermines the common ground necessary for constructive dialogue, compromise, and effective governance. As a result, societies become increasingly polarized, with deepening divisions along political, ideological, and cultural lines. In this environment of distrust and disagreement over basic facts, the social fabric frays, leaving communities vulnerable to manipulation by bad actors seeking to further their agendas at the expense of the greater good.
Today, many CISOs and cybersecurity experts agree that cyberattacks are now inextricably linked to narrative attacks, misinformation, and disinformation, and that mitigating false narratives can be a more significant and complex challenge than recovering from the cyberattack itself. Advanced AI systems like GPT-4 from OpenAI can now generate human-like text on demand for virtually any topic. While this aids creators and lowers barriers to content production, it also means propagandists and bad actors can "mass-produce" fake news articles, harmful social media posts, comments, and more to advance their agendas. Coupled with the hyper-personalization enabled by big data, micro-targeting groups with custom-tailored disinformation at scale is now possible.
If used irresponsibly, these systems could drown the online information space in a tsunami of false narratives and misleading distortions, overpowering the voices of credible journalists and expert institutions. As 404 Media recently noted, this risks undermining public trust in real news and polarizing societies around different sets of AI-generated "facts."