Dan Patterson | News
Generative AI is training the next generation of hackers
Cybersecurity experts say "it's going to become very hard to tell what is generated by a machine and what is generated by a human."
I recently spoke with Etay Maor, Senior Director of Security Strategy at Cato Networks and an expert in how threat actors exploit emerging technologies like generative artificial intelligence.
In the past, aspiring hackers spent countless hours refining their skills, researching targets, and trading tips on the dark web. Today, with AI, sophisticated cyberattacks can be launched with ease. Hackers can, for instance, use off-the-shelf generative AI tools such as ChatGPT and Google's Bard to craft a phishing email tailored to a specific target audience. The AI customizes the message for the target and writes it with perfect grammar and spelling, making it even more convincing.
Another alarming application of generative AI is disinformation. Machines can produce human-like text, making it difficult to distinguish authentic material from AI-generated content. The implications become even more profound when this capability is paired with technologies that produce deepfakes.
"It's going to become very hard, or it is already very hard to tell what is generated by a machine and what is generated by a human," said Maor. "And now, we have not just the generative AI in the sense of the [text] response, but you can generate images, videos, deep fakes ... you get what we're kind of referring to as almost a disinformation machine."