Democracy and Journalism in The Age of AI-Enabled Disinformation
Unchecked, disinformation can paralyze public knowledge, divide societies, and enable authoritarians — but solutions exist.
Last month, I delivered the keynote address at the European Broadcasting Union’s annual cybersecurity seminar on the dangers of AI-powered deepfakes and disinformation. The speech video is here (login required); the text version is below.
Emerging technologies like deepfakes and AI text, audio, and image generators have reached astonishing new heights in mimicking human content and media. But unrestrained, these systems also enable an unprecedented large-scale proliferation of misleading narratives and disinformation. In this pivotal moment, it is crucial for journalists, in particular, to confront the existential dangers posed to democracy by technologically enabled disinformation and to help chart feasible solutions.
Many CISOs and cybersecurity experts agree that cyberattacks are now inextricably linked to narrative attacks, misinformation, and disinformation, and that mitigating false narratives can be a more significant and complex challenge than recovering from a cyberattack itself. Advanced AI systems like GPT-4 from OpenAI can now generate human-like text on demand for any topic. While this aids creators and lowers barriers to content production, it also means propagandists and bad actors can "mass-produce" fake news articles, harmful social media posts, comments, and more to advance their agendas. Coupled with the hyper-personalization enabled by big data, micro-targeting groups with custom-tailored disinformation at scale is now possible.
Left unchecked, such systems deployed irresponsibly could fill the information space with a suffocating amount of false narratives and misleading distortions, overwhelming the voices of credible journalists and expert institutions. This risks severely undermining public trust in real news and dividing societies through polarization from different sets of "AI-generated facts."
And it's not just text: image, video, and audio generation have advanced rapidly with models like OpenAI's DALL-E 2 and Google's Imagen and MusicLM. Like today's "cheap fakes," these systems don't need to be perfect. They just need to be good enough, fast, and shocking to shape first impressions and beliefs, which are stubbornly hard to change even when proven false.
The result may be mass confusion about what's real, an inability to have shared truths and solve collective problems, and fertile ground for authoritarians to seize power. This is the dangerous future we may face if AI disinformation goes unchecked.
Deepfake Disinformation Danger
Deepfakes represent an especially alarming AI disinformation threat - forging convincing video or audio of high-profile people saying or doing things they never actually did. Enabled by generative adversarial networks (GANs), deepfakes have rapidly improved from blurry face swaps to lifelike forgeries that are extremely difficult to detect.
Imagine a video of a politician accepting a bribe, contemptuous remarks from a celebrity, or false orders from a general - all synthesized by AI, depicting events that never happened. Even when debunked after the fact, deepfakes can ruin careers and reputations through their initial believability and sensationalism.
Bad actors also produce shallow or "cheap" fakes - simple edits like slowing down or splicing clips to remove context. These don't require sophisticated AI but can similarly devastate targets.
The ability to fabricate plausible events puts a dangerous power in the hands of the corrupt, who would face few limits in destroying opponents. As with fake text, flooding media with deepfakes could paralyze the public's ability to discern truth from fiction.
In early 2022, for example, Facebook and Twitter removed a deepfake video of Ukrainian President Volodymyr Zelensky calling on his soldiers to surrender, highlighting the growing threat posed by artificial intelligence tools that can manipulate faces and voices to create fake media. While Zelensky's government was able to debunk this video quickly, experts warn that deepfakes are becoming more sophisticated and could be used to spread misinformation and sow public discord. Social media platforms still struggle to detect deepfakes and contain their impact. Combating the malicious use of deepfakes and cheap fakes will require new laws, improved forensic tools, greater public awareness, and more critical assessment of media authenticity by individuals.
Impacts on Journalism
Trust erosion poses an existential threat to journalism and democracy. If the public cannot believe their eyes and ears, determining truth from fiction becomes nearly impossible. This breakdown of shared reality undermines constructive debate, problem-solving, and accountability, and creates divisions ripe for exploitation by bad actors seeking power. As more voices produce AI-synthesized content, real journalism risks becoming irrelevant, unable to resonate amid the distorted noise. The institutions charged with transparency in the public interest, like quality news outlets, risk losing authority and influence.
Without renewed public trust in facts, evidence, and ethical reporting, society grows vulnerable to manipulation and "post-truth" authoritarianism. Urgent action is required to avoid this dystopian AI disinformation future where democracy and truth themselves unravel.
Constructive debate and policymaking break down if citizens cannot agree on basic facts and realities, creating divisions that bad actors can exploit. People may tune out entirely from the cacophony of disinformation, retreating to fringe sources that confirm their biases.
These effects will only snowball over time as fake content acquires plausibility through repetition across media channels. Even debunking can backfire by driving more attention to false claims. The institutions charged with upholding truth and transparency in the public interest - journalism included - face the prospect of becoming irrelevant, distrusted relics in the minds of a misled populace.
Solutions to Maintain Public Trust
With democracy and truth itself under siege, what can be done to avoid the AI disinformation dystopia? Though the threat is daunting, we can overcome it through awareness, innovation, cooperation, and ethical standards. Some beginning steps include:
Industries using AI for content must practice extreme due diligence, considering risks and implementing safeguards. Don't underestimate the potential for abuse.
Technologists must prioritize developing forensic tools to detect fake images, videos, audio, and text efficiently. This will empower fact-checkers and content moderators.
Governments should explore regulations targeting malicious uses of deepfakes, such as political manipulation or fraud, weighing freedom of speech against public harm.
News organizations and tech platforms should ally against AI disinformation, collaborating on detection, debunking, labeling, and downranking of false content.
Journalists must redouble their efforts on transparency, ethics, and accuracy to retain public trust as other sources proliferate. Don't underestimate the continued value of real investigative work.
Readers must become more discerning, leaning on trusted brands and applying healthy skepticism when evaluating news and information online. Check sources and evidence.
We must have an open societal conversation on preparing for technological change while retaining truth, ethics, and democracy. There are no easy answers, but complacency is not an option.
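To make the forensic-tooling point above concrete, here is a minimal sketch of perceptual hashing (average hash), one family of techniques that media-forensics and content-moderation pipelines use to flag near-duplicate or lightly edited imagery, the kind of "cheap fake" described earlier. Everything here is illustrative, not from any production tool: the tiny 4x4 grid stands in for a decoded, downscaled grayscale image, and real systems use far more robust hashes and thresholds.

```python
# Toy average-hash (aHash) sketch: hash an image by its brightness
# pattern, then compare hashes. Small Hamming distance suggests the
# same underlying picture, even after minor edits.

def average_hash(pixels):
    """Return a bit string: '1' where a pixel is >= the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p >= mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny 4x4 grayscale "image" and a lightly edited copy.
original = [
    [ 10,  10, 200, 200],
    [ 10,  10, 200, 200],
    [200, 200,  10,  10],
    [200, 200,  10,  10],
]
edited = [row[:] for row in original]
edited[0][0] = 250  # one brightened pixel; overall structure unchanged

h1, h2 = average_hash(original), average_hash(edited)
print(hamming(h1, h2))  # prints 1: small distance, likely the same image
```

A cryptographic hash (e.g. SHA-256) would change completely after any edit, which is why forensic tools favor perceptual hashes for tracing manipulated copies of known media back to their originals.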
Averting an AI Misinformation Catastrophe
The rise of synthetic media represents a turning point for civilization. Like nuclear power, AI can bring both significant progress and existential risk. We must confront the dangers directly, with open eyes, sincere hearts, and innovative minds. We can adapt and thrive by taking the threat seriously, implementing ethical safeguards, empowering journalists and citizens with technology, and cooperating across industries.
There will be setbacks along the way - technologies misused, harm inflicted, trust eroded. But we cannot allow pessimism to blind us to how close society is to a breakthrough. With compassion and courage, a brighter future awaits, where these technologies uplift human potential rather than undermine it.
The time is now to steer this conversation and the future toward the light. There is wisdom enough if we listen. There is sufficient time if we act. Powerful forces will align with progress and justice.