A major internal mishap at Anthropic has resulted in the unintended exposure of a large portion of its AI coding system, Claude Code. The incident occurred when version 2.1.88 was published with a source map file that allowed reconstruction of the original codebase. The leak quickly spread online and is now effectively impossible to contain.
How did the Claude Code leak happen?
The issue stemmed from a packaging error during deployment. A 59.8 MB source map file, intended only for debugging, was mistakenly included in the public release. Because source maps link compiled output back to the original files and often embed their full text, the file enabled reconstruction of approximately 512,000 lines of code across nearly 1,900 files, exposing core parts of the system.
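To see why a stray source map is so damaging, consider that standard source maps are plain JSON and frequently carry a `sourcesContent` array holding the verbatim original files. A minimal sketch of how anyone could dump those sources (the function name and path handling here are illustrative assumptions, not details from the leaked package):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> int:
    """Recover original files from a source map that embeds sourcesContent.

    Returns the number of files written. Entries whose sourcesContent is
    None carry only position mappings, not the source text, and are skipped.
    """
    smap = json.loads(Path(map_path).read_text())
    sources = smap.get("sources", [])
    contents = smap.get("sourcesContent") or []
    written = 0
    for name, text in zip(sources, contents):
        if text is None:
            continue  # mappings only; original text not embedded
        # Strip bundler scheme prefixes (e.g. "webpack://") to get a
        # relative path we can safely recreate on disk.
        rel = name.replace("webpack://", "").lstrip("./")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(text)
        written += 1
    return written
```

The point is that no reverse engineering is required: if the map embeds `sourcesContent`, the original tree falls out with a few lines of scripting, which is why shipping a debug map in a public release exposes so much.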
Who discovered and spread the leak?
The leak was first identified by researcher Chaofan Shou, who shared it online, drawing massive attention. Within hours, millions had accessed the data. Although Anthropic quickly removed the package, copies had already been mirrored widely across the internet.
What was revealed inside the code?
The leaked files exposed detailed internal architecture, including LLM orchestration systems, multi-agent coordination, permission layers, OAuth mechanisms, and dozens of hidden feature flags. Notably, systems like “Kairos,” which manages long-term memory, and “Buddy,” an AI companion with gamified traits, were uncovered.
Why couldn’t the leak be contained?
Attempts to remove the code using DMCA takedowns were largely ineffective. Mirrors quickly appeared across multiple platforms, including decentralized repositories, making it nearly impossible to fully erase the content from the internet.
What role did developers play after the leak?
Some developers responded by creating “clean-room” rewrites of the leaked system. One such project translated the architecture into Python, gaining massive popularity in a short time and avoiding direct copyright violations.
What are the legal implications?
The situation raises complex legal questions, particularly regarding AI-generated code and copyright ownership. If parts of the original system were created by AI itself, enforcing intellectual property rights may become significantly more difficult.
Why does this incident matter?
Beyond the immediate controversy, the leak highlights the challenges of controlling information in a decentralized digital environment. Once sensitive data spreads across multiple platforms, especially decentralized ones, it becomes virtually permanent.
