Anthropic Accidentally Exposes Source Code for Claude Code
Anthropic exposed more than 500,000 lines of Claude Code source code via npm, risking intellectual property loss and exploitation by attackers; the code was rapidly mirrored on GitHub.
- On Tuesday, March 31, 2026, Anthropic accidentally published Claude Code's source code to the public npm registry after uploading the original source files instead of the finished release build.
- A misconfigured JavaScript source map intended for internal debugging inadvertently exposed more than 512,000 lines of code spanning roughly 1,900 TypeScript files in a zip archive.
- Security researchers discovered internal features including 'KAIROS' autonomous daemon logic, 'Undercover Mode' for stealth contributions, and a sophisticated 'Self-Healing Memory' architecture that competitors can now study.
- The leak gives competitors a blueprint for building similar agents, potentially threatening Claude Code's $2.5 billion in annualized revenue and leveling the playing field in agentic AI development.
- Anthropic stated the incident was 'human error, not a security breach,' while the leak also exposed details of the unreleased Capybara model, a planned successor to Opus.
138 Articles
House Democrat pushes Anthropic on safety protocols, source code leak
Rep. Josh Gottheimer (D-N.J.) pressed Anthropic on Thursday about recent changes to its internal safety protocols following reports that part of the source code for the AI firm’s Claude Code tool was accidentally leaked. The company narrowed its AI safety policy pledge in late February, removing a previous commitment to halt development of its AI…
Lawmaker Probes Anthropic Over AI Leak
U.S. lawmakers are increasing scrutiny of artificial intelligence firms after reports of a source code leak at Anthropic, according to Axios. Representative Josh Gottheimer has asked the company to explain the incident and recent changes to its safety protocols, citing potential national security risks. The report said leaked material linked to Anthropic’s Claude system has raised concerns about misuse and foreign access. Gottheimer warned th…
Anthropic leak reveals Claude Code tracking user frustration and raises new questions about AI privacy
Code that reads your frustration is the least interesting part of the story of this accidental leak from Anthropic. The leak reveals how AI tools are also concealing their own role in the work they help produce
Coverage Details
Bias Distribution
- 45% of the sources are Center