In mid-September 2025, the AI company Anthropic, developer of the Claude family of large language models, detected a sophisticated cyber espionage operation that it attributes to a Chinese state-sponsored group. Anthropic tracks the operation as GTG-1002 and says it represents a fundamental shift in how advanced threat actors use AI.
An internal Anthropic investigation found a professionally coordinated operation involving multiple simultaneous targeted intrusions. Anthropic reported that the threat actor manipulated Claude Code into supporting reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration, largely autonomously.
The human operators tasked instances of Claude Code to operate in groups as autonomous penetration-testing orchestrators and agents, allowing the threat actor to use AI to execute 80-90% of tactical operations independently, at request rates physically impossible for human operators to sustain.
Multi-phased operation
The diagram summarizes the nature of the Anthropic GTG-1002 attack. Human operators began the campaign by selecting targets against which the AI would conduct autonomous research. The targets included technology companies, financial institutions, chemical companies, and government agencies in multiple countries.

Subsequent phases included reconnaissance and attack surface mapping, vulnerability discovery and validation, credential harvesting and lateral movement, data collection and intelligence extraction, and documentation and handoff to other attack processes. These phases ran largely autonomously, a division of labor sketched in the example below.
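To make that orchestrator-plus-agents structure concrete, here is a minimal illustrative sketch in Python. Every name in it (the Phase labels, the Agent class, the run_phase stub) is hypothetical, invented for illustration; the stub merely records simulated results and carries no offensive capability. The point is the shape: human input is limited to supplying targets, while a loop fans phased work out to multiple worker agents, which is how request volumes can exceed anything human operators could generate by hand.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Phase(Enum):
    # Hypothetical labels mirroring the campaign stages described above.
    RECON_AND_MAPPING = auto()
    VULN_DISCOVERY = auto()
    CREDENTIALS_AND_LATERAL = auto()
    COLLECTION_AND_EXTRACTION = auto()
    DOCUMENTATION_AND_HANDOFF = auto()


@dataclass
class Agent:
    """Stand-in for one AI worker instance; it only records what it was asked to do."""
    name: str
    log: list = field(default_factory=list)

    def run_phase(self, target: str, phase: Phase) -> str:
        # A real system would do actual work here; this stub just simulates a result.
        result = f"{self.name}: simulated {phase.name} against {target}"
        self.log.append(result)
        return result


def orchestrate(targets: list[str], agents: list[Agent]) -> list[str]:
    """Round-robin each phase across worker agents, one target at a time."""
    results = []
    for target in targets:
        for i, phase in enumerate(Phase):
            agent = agents[i % len(agents)]  # delegate to the next available agent
            results.append(agent.run_phase(target, phase))
    return results


if __name__ == "__main__":
    report = orchestrate(["example-target-1", "example-target-2"],
                         [Agent("agent-a"), Agent("agent-b")])
    for line in report:
        print(line)
```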
Precursor cases
Earlier in 2025, Anthropic had documented abuse of its platform, in a case labeled GTG-2002, in which a threat actor connected through compromised VPNs and used coded agents to execute operations on victim networks. Those attacks, however, were orchestrated by human operators; in the GTG-1002 case, the exploits were largely automated.
Another case of weaponized AI was reported in August 2025 by New York University, which said that a prototype simulation model developed by NYU's Tandon School of Engineering "carried out all four phases of ransomware attacks — mapping systems, identifying valuable files, stealing or encrypting data, and generating ransom notes — across personal computers, enterprise servers, and industrial control systems."
That model, called "Ransomware 3.0," consumed approximately 23,000 AI tokens per complete attack execution, equivalent to roughly $0.70 using commercial API services.
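As a quick back-of-the-envelope check on those two reported figures (a sketch, assuming only the numbers given above), the implied per-token price works out to roughly $30 per million tokens, in line with commercial API rates:

```python
# Back-of-the-envelope check of the reported Ransomware 3.0 cost figures.
tokens_per_attack = 23_000  # reported tokens consumed per complete attack
cost_per_attack = 0.70      # reported cost in USD via commercial API services

cost_per_token = cost_per_attack / tokens_per_attack
cost_per_million = cost_per_token * 1_000_000

print(f"~${cost_per_token:.6f} per token")        # ~$0.000030
print(f"~${cost_per_million:.2f} per 1M tokens")  # ~$30.43
```

The takeaway is the same one NYU's figures are meant to convey: a complete automated attack run costs well under a dollar.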
Why this matters
Anthropic summarized the implications of the attack as demonstrating that the barriers to conducting sophisticated cyberattacks have become porous. Threat actors can use agentic AI systems to do the work of entire teams: analyzing targets, producing code, and then accessing, scanning, and exfiltrating data with great efficiency.
Anthropic said that GTG-1002 was the first documented case of a cyberattack executed at scale largely without human intervention, one that successfully gained access to confirmed high-value targets for intelligence collection, including major technology corporations and government agencies.
Keep in mind that this was the first reported instance of an attack campaign automated through a commercial AI platform. As AI is a relatively new technology category, whose protections may lack the experience-based rigor of more mature technologies, it is entirely possible that automated campaigns like this have happened elsewhere.
Not to be confused
Other cases of AI exploitation have emerged in recent news. But the Anthropic case is different: it was an orchestrated cyberattack by a nation-state actor, not a case of commercial exploitation that some would say is rooted in questionable ethics.
As one example of a commercial case, Amazon sued Perplexity in November, accusing Perplexity of using its Comet agentic AI browser to access private Amazon customer account data and place automated orders with Amazon. Amazon has similar ambitions of its own (it is working on an AI-driven "Buy for me" feature and AI assistants that make recommendations), but these do not rise to the level of the Anthropic case.
Further reading
Disrupting the first reported AI-orchestrated cyber espionage campaign. Summary article. November 13, 2025. Anthropic
Disrupting the first reported AI-orchestrated cyber espionage campaign. Full report. November 17, 2025. Anthropic
Large language models can execute complete ransomware attacks autonomously, NYU Tandon research shows. Article. August 28, 2025. New York University Tandon School of Engineering
Threat Intelligence Report: August 2025. Report (containing details about GTG-2002, a human-directed attack on Anthropic’s Claude platform). August 2025. Anthropic
Amazon sues AI startup over browser’s automated shopping and buying feature. Article. November 5, 2025. The Guardian (UK)