The Year AI Became a Hacker
Blackbird.AI's top research stories of 2025 reveal how narrative attacks evolved, targeting execs, banks, aviation, Hollywood and pop culture, and sunscreen. Yes, sunscreen.
Narrative intelligence emerged as one of the fastest-growing cybersecurity categories this year. I work as the bridge between Blackbird.AI’s RAV3N research intelligence team and our AI engineers. In 2025, we watched AI systems evolve from criminal tools into autonomous operators capable of executing complete attack campaigns without human oversight. Narrative attacks also targeted executives, banks, aviation, Hollywood, and even sunscreen.
Blackbird.AI’s Narrative Intelligence platform detects these attacks by analyzing manipulated narratives, identifying threat actors, mapping amplification networks, flagging bot and autonomous behavior, and documenting the communities that connect them. And our RAV3N research team interprets data in granular detail for high-profile partners. Their analysis revealed coordinated campaigns, synthetic media, and influence networks that moved faster than crisis communications teams could respond.
The full scope of RAV3N research is documented in our annual Narrative Intelligence roundup blog post. These are some of the most significant narrative attacks of 2025:
Hijacked AI Agent Signals New Normal
Chinese state-sponsored hackers hijacked Anthropic’s Claude AI and weaponized it against 30 organizations. The AI agent scanned networks, identified high-value databases, wrote exploit code, harvested credentials, and exfiltrated data. Human operators intervened at only four to six junctures across the entire operation. Claude performed thousands of operations per second, something human hackers could never match. The attackers tricked the AI by breaking malicious objectives into small, innocuous-looking requests and instructing Claude to behave as a cybersecurity employee conducting authorized penetration testing.
AI Is Now the Operator: The End of Human-Led Cyberattacks
AI systems now conduct reconnaissance, identify vulnerabilities, craft exploitation strategies, and execute complete extortion campaigns without human intervention. Hospitals received demands exceeding half a million dollars from AI-generated ransomware that adapted its approach based on network topology. North Korean operators use AI to secure legitimate positions at major corporations, handling technical interviews, submitting code reviews, and participating in team communications while harvesting intelligence. Ransomware construction kits sell for several hundred dollars and include military-grade encryption and endpoint detection bypass mechanisms.
Disinformation Security and Narrative Intelligence
Narrative intelligence became one of the fastest-growing cybersecurity categories in 2025. Executives recognized that narrative attacks targeting their companies and leaders represent a critical threat vector that traditional security tools cannot detect. Publicly traded companies lose approximately $39 billion annually to narrative-attack-related stock market losses. Most organizations still treat narrative attacks as communications problems rather than security threats. Firewalls do not stop deepfakes. Endpoint detection cannot identify coordinated bot campaigns. Security information and event management systems lack visibility into perception manipulation occurring on social platforms.
AI Actor Tilly and Hollywood’s Manipulated Narratives
An AI-generated actor named Tilly Norwood debuted and triggered seven distinct manipulated narratives. Blackbird.AI found that 36 percent of posts from actors emphasized identity-theft concerns, while 28 percent of conversations framed talent agencies as traitors. Significant bot-like activity amplified debates about labor displacement, training-data ethics, and union governance. SAG-AFTRA condemned the synthetic performer. The case demonstrates how AI-driven controversy generates overlapping narrative attacks across multiple vectors simultaneously. Questions arose about intellectual property, likeness rights, and the contractual frameworks governing synthetic talent.
Narrative Attacks Threaten to Hijack the Taylor Swift Travis Kelce Engagement
The celebrity announcement became a manipulation study. Blackbird.AI identified five coordinated narratives: PR-stunt allegations, NFL conspiracy theories, misogynistic attacks, prediction-market exploitation, and partisan political framing. Traders wagered more than $250,000 on engagement-related bets. Probability markets surged from 25 percent to 45 percent hours before the public announcement. Individual traders reportedly made more than $3,000 in a single day buying shares just before the news broke. The incident triggered debates about profiting from private relationship milestones and raised regulatory questions about information leakage.
How a Coordinated Narrative Attack Shook U.S. Banking Trust
The #TheBanksAreOutOfMoney hashtag flooded social media following new tariff announcements. Blackbird.AI analysis found that 17.2 percent of accounts displayed bot-like behavior. Coordinated actors circulated doctored Bloomberg-style screenshots showing false liquidity crises. Non-attributable burner accounts shared repetitive meme-style content with overlapping hashtags. A single post urging people to withdraw their funds while they still could was widely circulated by both real users and bots. The campaign triggered genuine withdrawal concerns before banks could respond and demonstrated how brittle trust can be when subjected to well-timed narrative attacks.
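Coordination signals like these, repetitive near-identical text and overlapping hashtag sets across distinct accounts, can be approximated with simple similarity heuristics. A minimal sketch, not Blackbird.AI's actual detection pipeline; the post data, field names, and thresholds are hypothetical:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated(posts, text_sim=0.8, tag_sim=0.5):
    """Flag accounts whose posts share near-identical wording or
    heavily overlapping hashtags -- a crude coordination signal."""
    flagged = set()
    for p1, p2 in combinations(posts, 2):
        if p1["account"] == p2["account"]:
            continue
        same_text = jaccard(p1["text"].lower().split(),
                            p2["text"].lower().split()) >= text_sim
        same_tags = jaccard(p1["hashtags"], p2["hashtags"]) >= tag_sim
        if same_text or same_tags:
            flagged.update([p1["account"], p2["account"]])
    return flagged

# Illustrative posts (invented for this sketch):
posts = [
    {"account": "user_a", "text": "Withdraw your funds while you still can",
     "hashtags": {"TheBanksAreOutOfMoney", "BankRun"}},
    {"account": "burner_1", "text": "Withdraw your funds while you still can!",
     "hashtags": {"TheBanksAreOutOfMoney", "BankRun"}},
    {"account": "user_b", "text": "Great earnings report from my local bank today",
     "hashtags": {"finance"}},
]
print(flag_coordinated(posts))  # user_a and burner_1 are flagged
```

Real systems weight many more features (posting cadence, account age, network structure); the point here is only that duplicate content plus shared hashtags already separates the burner pair from the organic account.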
Narrative Attacks Manipulate Public Perception of Aviation Safety
High-profile incidents triggered coordinated reputation attacks against airlines. False claims spread about maintenance practices, regulatory failures, and negligence. Following the midair collision between a Blackhawk helicopter and an airliner at DCA, Blackbird.AI detected a 60 percent spike in fear-of-flying mentions. Negative narratives on social media spiked 64 percent. News reports on aviation safety concerns rose 30 percent. Bad actors exploited public fear to damage industry reputation, demonstrating how safety-sensitive industries face coordinated reputation attacks during crisis events.
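A 60 percent spike over baseline is the kind of signal a rolling-baseline monitor can surface automatically. A toy sketch, assuming hourly mention counts; the numbers and window size are illustrative, not Blackbird.AI's methodology:

```python
def detect_spikes(counts, window=24, threshold=0.6):
    """Return indices where a count exceeds the trailing-window
    average by more than `threshold` (0.6 = a 60% spike)."""
    spikes = []
    for i in range(window, len(counts)):
        baseline = sum(counts[i - window:i]) / window
        if baseline > 0 and (counts[i] - baseline) / baseline > threshold:
            spikes.append(i)
    return spikes

# Hypothetical hourly fear-of-flying mention counts: a flat
# baseline of ~100, then a jump after a crisis event.
mentions = [100] * 24 + [105, 180, 240]
print(detect_spikes(mentions))  # → [25, 26]
```

The trailing average keeps the baseline local, so a slow seasonal rise does not trigger alerts while a sudden post-incident surge does.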
How Sunscreen Became a Target of Narrative Attacks in Summer 2025
An unlikely product became a narrative attack target. Coordinated campaigns spread claims about health risks, environmental harm, and misleading product standards. Consumer trust eroded. Regulatory scrutiny increased. Brands found themselves defending not just their products but scientific consensus itself. The case demonstrated how information campaigns can weaponize health and safety concerns against entire product categories. Consumer products companies learned that narrative vulnerability extends beyond corporate reputation to the fundamental trust consumers place in product safety.
Traditional security tools cannot detect these narrative attacks and won’t prevent deepfakes or identify coordinated amplification campaigns. The attack surface is now public perception. Organizations still treating narrative threats as communications problems will find themselves responding to crises that began and ended before anyone registered an alert.


Absolutely fascinating how the Claude hijacking case shows AI shifting from tool to operator. The part about tricking Claude into thinking it was doing legit penetration testing is honestly terrifying, because it exposes how context windows can bypass safety guardrails. I've been following the autonomous ransomware stuff, and the speed advantage is insane: thousands of operations per second means defenders are basically playing catch-up in slow motion. The sunscreen narrative attack, though, feels almost absurd until you realize it's proof that literally any topic can be weaponized for coordinated campaigns.