Navigating journalism in the disinformation age: The growing threats of AI and deepfake technology
Last week, I delivered the keynote address at the European Broadcasting Union’s cybersecurity seminar in Geneva. Video of the speech is here.
The lede: Unchecked disinformation can paralyze public knowledge, divide societies, and enable authoritarians — but solutions exist.
Emerging technologies like deepfakes and AI generators of text, audio, and images have reached astonishing new heights in mimicking human-created content and media. Left unrestrained, these same systems enable misleading narratives and disinformation to proliferate at unprecedented scale. At this pivotal moment, journalists in particular must confront the existential dangers that technologically enabled disinformation poses to democracy and help chart feasible solutions.
Some solutions:
Industries using AI to produce content must exercise rigorous due diligence, weigh the risks, and implement safeguards. They must also not underestimate the potential for abuse.
Technologists must prioritize developing forensic tools to efficiently detect fake images, videos, audio, and text. This will empower fact-checkers and content moderators.
Governments should explore regulations targeting malicious uses of deepfakes, such as political manipulation or fraud, weighing freedom of speech against public harm.
News organizations and tech platforms should ally against AI-driven disinformation, collaborating on headlines, debunking, labeling, and downranking of false content.
Journalists must redouble their transparency, ethics, and accuracy efforts to retain public trust as other sources proliferate. They should not underestimate the continued value of real investigative work.