Four critical AI vulnerabilities are being exploited faster than defenders can respond
From prompt injection to deepfake fraud, threat actors weaponize the same capabilities that make AI systems useful. Security researchers say several flaws have no known fix.
I recently published a story on ZDNet about AI hacking that examines four critical vulnerabilities threatening AI systems across industries and explains why most have no known fixes.
Autonomous AI agents are being hijacked to conduct cyberattacks without human intervention. In September, Anthropic disclosed that Chinese state-sponsored hackers weaponized its Claude Code tool to autonomously conduct reconnaissance, write exploit code, and exfiltrate data from approximately 30 targets. Security researcher Bruce Schneier put it plainly: we have zero agentic AI systems that are secure against these attacks.
Prompt injection, identified as a vulnerability several years ago, remains a significant architectural challenge for many modern AI systems, with few solutions. A recent study found that a significant percentage of attacks succeeded across all tested large language models, with larger and more capable models performing no better than smaller ones. There is no parameterized query equivalent for AI, the way there is for SQL injection. The flaw is structural.
Training data can be poisoned for as little as $60, with just 250 corrupted documents enough to backdoor any LLM regardless of its size. Unlike prompt injection, which exploits a model at inference time, data poisoning corrupts the model itself. The backdoor may already be sitting in production systems, dormant until triggered.
Deepfake fraud has already stolen tens of millions of dollars. A finance worker at Arup wired $25.6 million after a video call during which everyone on screen, including his CFO, appeared to be AI-generated. Detection technology can help, but the problem scales as the technology becomes more sophisticated and lower-cost. And recent research found that humans correctly identify high-quality deepfakes only 24.5% of the time.
The common problem across all four threats is that the capabilities that make AI useful are the ones attackers exploit. Organizations are deploying systems with fundamental flaws while threat actors are already weaponizing them at scale, and the gap between adoption and security is only widening.

