Disinformation and AI: The new era of cyber conflict
Security researcher Dan Woods reveals the dangers of AI-driven social media manipulation, its impact on elections, and why it may be more damaging than other existential cyber fears.
Bad actors use artificial intelligence systems to hack critical infrastructure, optimize and deploy ransomware, and generate hyper-realistic deep fake phishing scams.
Dan Woods, the Global Head of Intelligence at F5 Security, spoke with me at the Black Hat security conference about the rise of AI in cybersecurity. He explained that AI is intriguing because of the technology's potential for both good and bad. Generative AI makes it easier for bad actors with little technical sophistication to launch cyberattacks at scale.
Woods noted that one alarming trend is the use of AI to generate code for criminal purposes. Even an unskilled programmer can now write a malicious script within an hour using a tool like ChatGPT, making it far easier to craft tools for criminal activity. The improved quality of phishing emails and the automation of social engineering are further testament to AI's growing role in illegal activities.
"Everybody is rushing as quickly as they can into the space, but I don't think anybody's going into it any faster than criminals are, and frankly, that raises many concerns," Woods said.
Even more concerning, Woods said, is AI's potential to influence public opinion. Fake social media accounts and the proliferation of disinformation and synthetic media could change the outcome of elections, a threat he believes is already in play.
As we approach the next general election in the United States, the conversation serves as a timely reminder that AI, for all its potential and promise, can also be a powerful weapon in the wrong hands. The challenge lies in educating the public and perhaps even regulating social media companies to mitigate these risks.
Despite these concerning scenarios, Woods emphasized the importance of staying grounded in reality. Criminals typically use only the lowest level of sophistication necessary for success, and AI also helps organizations monitor threats and patch holes.