Human error leaks underpinnings of Anthropic’s Claude Code to competitors and bad actors


An employee of Anthropic, developer of the Claude generative AI platform, exposed elements of the system underlying the company’s AI agent application, Claude Code.

The leak was the result of an error in posting an update to GitHub, the software development platform. The employee’s build configuration inadvertently generated a source map file, a debugging artifact that lets developers map shipped code back to its readable source and that is meant to be excluded from release packages such as this one.
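Source maps are build artifacts, so whether one ships typically comes down to a single build option. A minimal sketch of how that toggle works, assuming a bundler like esbuild (Anthropic’s actual build tooling is not public):

```ts
// build.ts - minimal bundling script using esbuild's JS API.
// With `sourcemap` enabled, the bundler emits dist/cli.js.map alongside
// the minified bundle; anyone who obtains that file can reconstruct
// readable source. Release builds normally disable it.
import { build } from "esbuild";

const isRelease = process.env.NODE_ENV === "production";

await build({
  entryPoints: ["src/cli.ts"],
  bundle: true,
  minify: isRelease,
  platform: "node",
  outfile: "dist/cli.js",
  // The hazard: flipping this to true in a release build publishes
  // the debugging artifact along with the package.
  sourcemap: !isRelease,
});
```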


After identifying thousands of copies in the wild, Anthropic quickly issued copyright takedown requests and had derivative copies of the Claude Code instructions removed from GitHub.

Anticipating that Anthropic would quickly take the files down, one software developer used the leaked instructions to clone Claude Code’s functionality for use on other AI platforms.

Details of what was leaked

The leak exposed a number of proprietary processes internal to the Claude platform. One technical analyst found mechanisms designed to compromise queries made by suspected attackers (e.g., competing AI platforms). One of those injected fake information into query results that might be used to train rival generative AI models. Another returned results in encrypted form, so the user would see only a summary of the query result. Workarounds for both defenses reportedly exist.
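Anthropic has not confirmed how these mechanisms work, so any reconstruction is speculative. A minimal TypeScript sketch of the pattern the analyst describes, with every name hypothetical:

```ts
// Hypothetical reconstruction of the two defenses described above.
// None of these names come from the leaked code; the real detection
// heuristics and key handling are unknown.
import { createCipheriv, randomBytes } from "node:crypto";

interface QueryResult {
  answer: string;   // full generated answer
  summary: string;  // short human-readable summary
}

// Defense 1: swap the real answer for plausible-looking fabrications,
// so a scraper feeding results into a rival model ingests poisoned data.
function poisonForTraining(result: QueryResult): QueryResult {
  return { ...result, answer: "Fabricated filler text for suspected scrapers." };
}

// Defense 2: encrypt the full answer (AES-256-GCM here) and return only
// the summary in the clear, so the scraped payload is useless.
function encryptAnswer(result: QueryResult, key: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const encrypted = Buffer.concat([
    cipher.update(result.answer, "utf8"),
    cipher.final(),
  ]);
  return {
    summary: result.summary,
    payload: encrypted.toString("base64"),
    iv: iv.toString("base64"),
  };
}

// Dispatcher: ordinary users get the real result; flagged callers get
// one of the two degraded responses.
function respond(result: QueryResult, suspectedAttacker: boolean, key: Buffer) {
  if (!suspectedAttacker) return result;
  return Math.random() < 0.5 ? poisonForTraining(result) : encryptAnswer(result, key);
}
```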

Another process was an “undercover” mode that hides the AI that generates a query result, leaving the user unable to determine whether the result was produced by an AI or a human.
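Mechanically, such a mode would amount to stripping provenance metadata from a response before it leaves the service. A speculative sketch, with field names invented rather than taken from the leak:

```ts
interface DeliveredResponse {
  text: string;
  generatedBy?: "ai" | "human";  // provenance tag (hypothetical)
  modelVersion?: string;
}

// "Undercover" delivery: drop every field that would let the recipient
// attribute the text to a model, leaving only unattributed prose.
function deliverUndercover(r: DeliveredResponse): { text: string } {
  return { text: r.text };
}
```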

Marketing ploy?

Some analysts wondered whether the leaks were deliberate, a marketing ploy: this leak revealed references to a future version of Claude code-named Capybara or Mythos. A separate leak about that future version had occurred five days earlier.

Why it matters

Anthropic has successfully raised multiple rounds of private funding, and an IPO is anticipated in 2026. Leaks such as this one call into question Anthropic’s ability to protect its intellectual property, and therefore the company’s value.

The leak revealed proprietary and sensitive Anthropic details that let users direct how the Claude platform behaves. Suddenly and unintentionally, any attacker who wanted to leverage the platform to spread disinformation or conduct cyberattacks had the means to do so.

Anthropic has worked hard to differentiate itself from other generative AI platforms, most recently by discontinuing its Sora generative video platform out of concern that it could be abused, a decision that cost Anthropic a billion dollars and a relationship with Disney.

Further reading

WTF, Anthropic’s Claude Code keeps track of every time you swear. Deni Ellis Belchard and Eric Sullivan. Scientific American, April 2, 2026.

Anthropic races to contain leak of code behind Claude AI agent. Sam Schechner and Robert McMillan. The Wall Street Journal, April 1, 2026.

The Claude Code source leak: fake tools, frustration regexes, undercover mode, and more. Alex Kim. Alex Kim’s Blog, March 31, 2026.

The Great Claude Code Leak of 2026: Accident, incompetence or the best PR stunt in AI history? Varshith V Hegde. DEV Community, March 31, 2026.
