#ClaudeCode500KCodeLeak
On 1 April 2026, Anthropic, the AI safety company behind the Claude family of models, accidentally exposed the near-complete source code of its flagship developer product, Claude Code. The incident was not the result of a cyberattack, a malicious insider, or a sophisticated breach. It was a packaging mistake. A single misplaced file shipped to a public registry, and within hours, the AI world was poring over half a million lines of proprietary code.
How It Happened
When Anthropic published version 2.1.88 of the @anthropic-ai/claude-code package to the public npm registry, the build process inadvertently bundled a 59.8 megabyte JavaScript source map file, specifically cli.js.map, alongside the rest of the package. Source map files are debugging artifacts. They exist to link bundled, minified, or compiled code back to the human-readable original source. They are strictly internal tools and are never meant to ship to end users or appear on public package registries.
The problem was that this particular source map did not point to obfuscated or compiled code: it referenced unobfuscated TypeScript source files. Anyone who downloaded the npm package and knew how to work with source maps could reconstruct the original TypeScript codebase in a readable, fully navigable form. That is exactly what happened.
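The mechanics of such a reconstruction are straightforward. A Source Map v3 file is plain JSON whose `sources` array lists the original file paths and whose optional `sourcesContent` array embeds the full original file bodies. When `sourcesContent` is populated, no decompilation is needed; the originals can simply be written back to disk. The sketch below illustrates the general technique; it is not the tooling the researcher used, and the function name is my own.

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Reconstruct original source files from a Source Map v3 file.

    This only works when the map embeds the originals in `sourcesContent`,
    which is what made the cli.js.map exposure so complete.
    """
    source_map = json.loads(Path(map_path).read_text())
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    written = []
    for src, content in zip(sources, contents):
        if content is None:
            continue  # this entry records only a path, not the file body
        # Strip scheme-like prefixes (e.g. "webpack://") and path traversal.
        rel = src.split("://", 1)[-1].lstrip("./").replace("..", "_")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

Bundlers normally omit `sourcesContent` or withhold the `.map` file entirely from production builds, which is why shipping one to a public registry is so unusual.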
A security researcher affiliated with Solayer Labs, posting under the handle @Fried_rice on X, was reportedly the first to identify and unpack the exposure. Within a short window, a GitHub repository appeared hosting what was presented as the reconstructed source — around 512,000 lines of TypeScript code spread across approximately 1,900 files. The repository was forked rapidly before Anthropic could respond.
What Was Actually Inside
The leak did not expose Claude's underlying model weights, training data, or any customer credentials. Anthropic confirmed this directly. But what it did expose was arguably the next most sensitive thing: a detailed view of how Claude Code was architected, how it processes user intent, how it communicates with models, and what was being built behind the scenes.
Several findings quickly circulated among developers and researchers analyzing the code.
References to unreleased models were found throughout the codebase. The names Opus 4.7 and Sonnet 4.8 appeared, as did internal codenames Capybara and Mythos — the latter having already been partially revealed days earlier through a separate incident in which unpublished blog posts and documentation were inadvertently left accessible in a public-facing data cache. The Mythos and Capybara names appeared to refer to the same upcoming model, and the leaked code provided further confirmation that the launch was being actively prepared.
A feature described internally as ultraplan was discovered in the code. This appears to be an asynchronous multi-agent mode designed for longer research sessions, with estimated completion windows of between ten and thirty minutes. The implication is that Claude Code was being built to coordinate multiple agent instances over extended tasks, a significant architectural capability that had not been publicly disclosed.
There were also references to something called Kairos, described in the code as an always-on proactive agent: a background process that could initiate actions without explicit user prompting. Additionally, a feature called autoDream appeared to be a memory consolidation system, likely intended to help the agent retain context or summarize session history automatically.
Perhaps the most unexpected discovery was something internally named the Buddy System, which appeared to model a form of AI companion behavior, complete with attributes tracking chaos and snark levels. Whether this was a serious product feature or an internal experiment is unclear, but it drew significant attention online.
On the safety and telemetry side, the code revealed that Claude Code logs specific user expressions, including profane phrases like wtf and ffs, and flags them as is_negative in its analytics pipeline. More structurally significant was the finding that Claude Code's cybersecurity guardrails are implemented as plain text prompt strings rather than hard-coded logic, meaning they are in principle replaceable or modifiable without deep system changes. The codebase also contained more than 44 feature flags, most of them hidden from public documentation.
The code also showed that over 120 developer tool names were hardcoded for special treatment within the product, suggesting that Claude Code had been deliberately tuned to recognize and interact differently with specific integrations.
Community Response and Forks
The developer community moved fast. Within hours of the repository going public, multiple derivative projects appeared.
One fork, named OpenCode, was designed to strip out the Claude-specific model dependencies and replace them with a modular backend capable of routing requests to any large language model, including GPT, Llama, and others. The intent was to use the architectural patterns from Claude Code while making the system model-agnostic.
Another fork, named free-code, went further. It removed telemetry, disabled safety layers, and enabled experimental features. To avoid DMCA takedowns, it was distributed via IPFS rather than any centralized hosting platform.
Both forks raised immediate legal questions. The code is proprietary intellectual property. Redistribution and derivative use without a license constitutes copyright infringement in most jurisdictions. Some community members pointed out that even analyzing the code in detail could create legal risk depending on the circumstances. Despite this, the code continued to spread rapidly.
Context and Prior Incidents
The timing could not have been worse for Anthropic. Just days before the source map incident, the company had suffered a separate exposure when unpublished documentation and blog posts about the Mythos model were inadvertently left accessible in a publicly crawlable data cache. That incident was embarrassing. This one was significantly more damaging in terms of intellectual property.
Anthropic is currently operating at a reported annualized revenue run-rate of 19 billion dollars as of early 2026, and Claude Code specifically has been cited as generating an estimated 2.5 billion dollars in annualized recurring revenue — a figure that reportedly more than doubled over the first few months of the year. The product is central to the company's commercial trajectory.
The irony noted by several commentators is that Anthropic's own head of Claude Code had publicly stated in late 2025 that 100 percent of his recent contributions to the product had been written by Claude Code itself. The suggestion emerging from the community was that the packaging error responsible for including the source map file may have been the result of automated build processes operating without sufficient human review — in other words, a product partly shaped by AI may have been undone by the same automation. This is speculative and unconfirmed, but the narrative landed hard.
An AI-assisted code review of the leaked codebase, reportedly run through both GPT-5.4 and a high-tier Claude model, returned a score of 6.5 out of 10, characterizing the code as something along the lines of performance-aware spaghetti: it showed signs of optimization under pressure and iterative patching rather than clean foundational design.
Anthropic's Response
An Anthropic spokesperson confirmed the incident with a brief statement: "Earlier today, a Claude Code release included some internal source code." The company stated that no customer data or credentials were involved or exposed. The compromised version of the npm package was pulled quickly. When asked whether the company intended to pursue legal action against those who had published or forked the exposed repositories, Anthropic declined to comment beyond its initial statement.
As of early April 2026, public release notes still showed version 2.1.88 as the most recent Claude Code release, and the npm distribution pathway was listed in documentation as a deprecated compatibility path, suggesting the company was already in the process of migrating to other distribution mechanisms.
Broader Implications
This incident sits at an intersection of several ongoing conversations in the AI industry.
First, it highlights the risks of shipping AI developer tooling through public package registries without hardened build pipelines. The npm ecosystem in particular has a long history of accidental exposures, but the scale and sensitivity of this particular leak are unusual.
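Hardening against exactly this failure mode is not exotic. npm already offers a `files` allowlist in package.json to restrict what gets packed, and a `prepack` lifecycle script can act as a final gate over the staged package contents. The sketch below shows one possible gate: a scanner, with hypothetical function names, that fails the build if debug artifacts such as `.map` files are staged for release.

```python
from pathlib import Path

# Debug artifacts that should never reach a public registry.
FORBIDDEN_SUFFIXES = (".map", ".log")

def find_forbidden_artifacts(package_dir: str) -> list[str]:
    """Return paths of staged files that must not ship.

    Intended to run as a prepublish gate (e.g. from an npm `prepack`
    script) against the directory about to be packed into the tarball.
    """
    root = Path(package_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and p.name.endswith(FORBIDDEN_SUFFIXES)
    )

def assert_clean(package_dir: str) -> None:
    """Fail the build loudly if any debug artifact is staged."""
    offenders = find_forbidden_artifacts(package_dir)
    if offenders:
        raise SystemExit(
            f"refusing to publish, debug artifacts found: {offenders}"
        )
```

A check like this costs a few milliseconds per release; the incident suggests that nothing equivalent stood between Anthropic's bundler output and the public registry.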
Second, it raises questions about how AI companies balance speed with security. Claude Code had been growing at exceptional velocity, and that velocity appears to have contributed to a build process that did not catch a debug artifact being included in a public release.
Third, the presence of hidden features, replaceable safety strings, and extensive telemetry in the leaked code will likely intensify scrutiny of how AI coding tools handle user data and how safety measures are actually implemented at an engineering level rather than a policy level.
Fourth, the emergence of forks designed to remove telemetry and safety layers — even if legally questionable — demonstrates that once proprietary AI tooling is exposed at this level, the practical ability to control downstream use diminishes rapidly.
For competitors, the leak provides a rare and detailed look at one of the most commercially successful AI coding products ever built. For Anthropic, the work now involves not just patching a build process, but assessing what strategic advantage has been permanently transferred to the public domain.