Most people assumed it was an April Fools joke. It was not.
Anthropic accidentally shipped the entire source code of Claude Code to the public npm registry. 512,000 lines of TypeScript. 1,906 files. 44 hidden feature flags. All sitting inside a 59.8MB debug file that nobody remembered to exclude from the build.
By the time Anthropic pulled the package, the code had already been mirrored across GitHub, forked 82,000 times, and rebuilt into a community version that hit 50,000 stars in two hours.

When did this issue happen?
It started with a post on X. On March 31, 2026, Chaofan Shou discovered something Anthropic probably didn’t want the world to see: the entire source code of Claude Code, Anthropic’s official AI coding CLI, sitting in plain sight on the npm registry via a sourcemap file bundled into the published package.

What was the issue?
On March 31, 2026, Anthropic pushed version 2.1.88 of the @anthropic-ai/claude-code package to the public npm registry.
Hidden inside it was a 59.8 MB JavaScript source map file — a debugging artifact that pointed directly to their Cloudflare R2 storage bucket and exposed the full TypeScript source code.
When you publish a JavaScript/TypeScript package to npm, the build process often generates .map files. These are meant to help with debugging — they map minified production code back to the original source so stack traces actually make sense.

But here’s the catch: source maps don’t just map code — they can contain the entire original source code.
A typical .map file looks something like this:

```json
{
  "version": 3,
  "sources": ["../src/main.tsx", "../src/tools/BashTool.ts", "…"],
  "sourcesContent": ["// full original source code", "…"],
  "mappings": "AAAA,SAAS,OAAO…"
}
```

That sourcesContent array? That’s everything.
Every file. Every comment. Every internal constant. Every system prompt. All embedded as plain text inside a JSON file that npm will happily serve to anyone who downloads the package or runs npm pack.
So a debug file that should never have been public effectively made Anthropic’s internal architecture accessible within hours.
The root cause was likely very simple:
- missing *.map in .npmignore, or
- not properly configuring the files field in package.json, or
- leaving source map generation enabled in production builds
With tools like Bun (which Claude Code uses), source maps are generated by default unless explicitly turned off. So unless you actively exclude them, they can slip into your published package.
There was reportedly an internal system called “Undercover Mode,” designed to prevent the AI from leaking sensitive internal details like codenames or prompts.
They built an entire subsystem to avoid accidental leaks in complex scenarios, but ended up exposing everything through a .map file.
The file was just a debug artifact that should never have been there. Chaofan Shou noticed it, wrote a short script, pulled src.zip directly from Anthropic’s R2 bucket, and posted the download link on X. It was a config gap that anyone with curiosity could have found.
Root Cause: a missing *.map entry in .npmignore, or an unconfigured files field in package.json. One line of config would have hidden everything.
What Leaked?
The codebase was a product roadmap Anthropic never intended to publish — feature flags, unreleased models, internal benchmarks, and a Tamagotchi pet.
BUDDY — The Secret Pet Hidden Inside Claude Code
A fully built virtual companion system. Think of it like this: while you are coding, a small virtual companion sits next to your input box, watches what you are doing, and occasionally comments in a speech bubble. It has a name, a species, a personality, and it responds when you talk to it directly.
Your species — basically what your buddy looks like — isn’t random. It’s generated from your user ID. Same account, always the same creature.
18 species total, hidden in the code using character arrays so they wouldn’t show up in text searches. Developers decoded them anyway:
- Common (60%) — Pebblecrab, Dustbunny, Mossfrog
- Uncommon (25%) — Cloudferret, Gustowl
- Rare (10%) — Crystaldrake, Lavapup
- Epic (4%) — Stormwyrm, Voidcat
- Legendary (1%) — Cosmoshale, Nebulynx
Five stats per buddy: DEBUGGING · PATIENCE · CHAOS · WISDOM · SNARK
Claude writes your buddy’s personality on first hatch. Eye styles, animation frames, and yes — hats. Some locked behind rarer species.
Address it by name and it responds. It’s not decoration. It’s a watcher.
It’s a light feature, but a telling one: Anthropic was thinking about what it feels like to use Claude Code every single day for months. A familiar little creature — always the same, always yours — makes a tool feel more like a workspace you actually want to be in.
KAIROS
A persistent background assistant that runs on a scheduled <tick> interval, watches your environment, and acts without being prompted. It maintains append-only daily logs of everything it observes. It can trigger proactive actions based on those observations — not just respond to them. At night, it runs an autoDream process to consolidate and prune its own memory.
It also gets tools that regular Claude Code does not have:
- SendUserFile — push files and summaries directly to you
- PushNotification — send alerts to your phone
- SubscribePR — monitor pull request activity in the background
Any action that would block your workflow for more than 15 seconds gets automatically deferred. This is Claude trying to be useful without getting in the way.
None of this exists in the public build. KAIROS is gated behind an internal feature flag that is not present in the npm package you download today. There is no way to enable it as a user right now.
But the architecture is real, and it describes something significantly different from how Claude Code currently works. Currently Claude Code is reactive — you give it a task, it runs. KAIROS would be proactive — a background layer that builds context about your work over time, then acts on it without being asked.
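The tick-and-defer behavior described in the leak can be sketched roughly like this. Class and field names are illustrative, not KAIROS’s actual internals; only the 15-second budget comes from the article:

```javascript
// Anything that would block the user longer than this gets deferred.
const BLOCK_BUDGET_MS = 15_000;

class BackgroundAssistant {
  constructor() {
    this.deferred = []; // actions postponed until the user is idle
  }

  // Called on each scheduled <tick> with the actions the watcher proposes.
  onTick(actions) {
    for (const action of actions) {
      if (action.estimatedMs > BLOCK_BUDGET_MS) {
        this.deferred.push(action); // too disruptive to run right now
      } else {
        action.run(); // cheap enough to act on immediately
      }
    }
  }
}
```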
ULTRAPLAN — 30 Minutes of Deep Thinking, Running in the Cloud
Most planning in Claude Code happens locally and instantly. ULTRAPLAN does the opposite: Claude Code offloads a complex planning task entirely to a remote session running Opus 4.6 in the cloud (a remote Cloud Container Runtime (CCR) session), gives it up to 30 minutes to think, and lets you approve the result from your browser before anything executes.
The 30-minute budget is the telling detail. This is not autocomplete. This is Anthropic acknowledging that some tasks need sustained, uninterrupted reasoning — the kind that does not fit inside a single prompt response.
The browser approval step is equally interesting. You are not just delegating to the AI. You stay in the loop. The plan comes back to you before a single line of code changes.
For long, complex tasks where getting the plan wrong is expensive — a large refactor, a system migration, an architectural decision — this is a meaningful capability.
Coordinator Mode — Claude Code’s Multi-Agent System
Most people use Claude Code as a single assistant. Coordinator Mode turns it into a team.
Hidden inside the leaked source is a full multi-agent orchestration system. When enabled via CLAUDE_CODE_COORDINATOR_MODE=1, Claude Code transforms from a single agent into a coordinator that spawns multiple worker agents, runs them in parallel, and reconciles their results.
Workers communicate via <task-notification> XML messages. There is a shared scratchpad directory so workers do not repeat what another has already figured out.
This is not multi-threading. It is closer to a small team of agents working in parallel, each handling its own subtask, with one coordinator routing work and reading actual results.
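In outline, the coordinator pattern looks something like the sketch below, with invented names; the leaked orchestration code is of course far larger:

```javascript
// One coordinator fans subtasks out to parallel workers and reconciles
// their results through a shared scratchpad, so no worker repeats a
// subtask another has already claimed. Illustrative only.
async function coordinate(subtasks, runWorker) {
  const scratchpad = new Map(); // shared findings, keyed by subtask id
  await Promise.all(
    subtasks.map(async (task) => {
      if (scratchpad.has(task.id)) return; // already claimed by a peer
      scratchpad.set(task.id, null);       // claim before doing the work
      scratchpad.set(task.id, await runWorker(task, scratchpad));
    })
  );
  return scratchpad; // the coordinator reads actual results from here
}
```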
Undercover Mode
The leaked source contains a check for USER_TYPE === 'ant' — a flag that identifies Anthropic employees. When that flag is true and the user is working in a public repository, the system automatically enters what the code calls Undercover Mode.
Here is what the injected system prompt actually says:
“You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”
When Undercover Mode is active:
- Claude is instructed to never mention it is an AI
- Co-Authored-By lines — the commit metadata that identifies AI involvement — are stripped from git output
- Internal codenames like Capybara and Tengu are hidden from responses
- Internal Slack channels, project names, and tooling references are scrubbed
- There is no force-off switch in the user-facing interface
The activation logic is automatic. If the repository remote does not match an internal allowlist, Undercover Mode switches on. No manual toggle. No opt-out.
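The activation logic reduces to a small predicate. The USER_TYPE === 'ant' check comes from the leak; the allowlist contents here are hypothetical:

```javascript
// Hypothetical allowlist; the real internal list was not published.
const INTERNAL_REMOTES = ["github.com/anthropic-internal/example-repo"];

function undercoverMode({ userType, remoteUrl }) {
  if (userType !== "ant") return false; // only Anthropic employees qualify
  const isInternal = INTERNAL_REMOTES.some((r) => remoteUrl.includes(r));
  return !isInternal; // public repo -> undercover switches on automatically
}
```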
This confirms a few things the leaked code makes hard to ignore. Anthropic employees are actively using Claude Code to contribute to public open source repositories. The AI is instructed to hide that it is an AI. And the system was designed to do this quietly, without leaving traces in commit history. None of this exists in the public build.
AutoDream — Dreams at Night
Claude Code has a background memory consolidation system called autoDream — a forked subagent that runs while you are away, goes through everything observed during your sessions, and organises it into structured memory files for the next time you open Claude Code.
It is not always running. It has a three-gate trigger:
- At least 24 hours since the last dream
- At least 5 sessions have passed since the last dream
- A consolidation lock must be acquired — so two dreams never run at the same time
All three must pass. This prevents both over-dreaming and under-dreaming.
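The three gates compose into a single predicate. This is a sketch with illustrative field names; the 24-hour and 5-session thresholds are the ones the leaked code describes:

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Returns true only when all three gates pass.
function shouldDream(state, now = Date.now()) {
  const enoughTime = now - state.lastDreamAt >= DAY_MS;  // gate 1
  const enoughSessions = state.sessionsSinceDream >= 5;  // gate 2
  const lockFree = !state.consolidationLocked;           // gate 3: one dream at a time
  return enoughTime && enoughSessions && lockFree;
}
```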
When it runs, it follows four phases:
- Orient — reads the memory directory and skims existing files to understand what is already stored
- Gather — finds new information worth keeping, pulling from daily logs and session transcripts
- Consolidate — writes or updates memory files, converts relative dates to absolute, deletes contradicted facts
- Prune — keeps memory under 200 lines and 25KB, removes stale references, resolves contradictions
The dream subagent gets read-only access. It can look at your project but cannot modify anything. It is purely a memory pass.
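The Prune phase’s budget of 200 lines and 25 KB can be sketched as a simple truncation pass; the limits come from the article, while the drop-from-the-end strategy is a guess:

```javascript
const MAX_LINES = 200;
const MAX_BYTES = 25 * 1024;

// Trim a memory file down to the line and byte budgets.
function pruneMemory(text) {
  let lines = text.split("\n").slice(0, MAX_LINES);
  let out = lines.join("\n");
  while (Buffer.byteLength(out, "utf8") > MAX_BYTES && lines.length > 0) {
    lines = lines.slice(0, -1); // drop trailing entries until under budget
    out = lines.join("\n");
  }
  return out;
}
```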
Note : KAIROS watches and acts during the day. autoDream consolidates and organises at night. Together they describe something that behaves less like a tool you open and more like a system that is always learning about how you work — even when you are not there.
Architecture
The leaked source revealed that Claude Code is not a simple wrapper around an AI model — it is a six-layer production system covering a self-healing query engine, a plugin-style tool system, a four-layer persistent memory architecture, multi-agent orchestration, compile-time feature elimination, and telemetry — all built to make the model reliable, not just capable.

- User interface — how you talk to Claude Code: VS Code, terminal, browser, or voice.
- Query engine — the brain. 46,000 lines that handle errors silently so you never see a crash.
- Tool system — 40+ actions Claude can take: read files, run bash commands, edit code.
- Memory — Claude remembers your work across sessions: your notes, its notes, and a nightly cleanup.
- Coordinator mode — Claude splits big tasks across multiple agents working at the same time.
- Feature flags — control what you see vs what Anthropic keeps internal.
- Telemetry — tracks how you use it: when you get frustrated, when you hit continue, when things fail.
Clarification from Anthropic
“Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed.”
“This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
Summary
- Only the source code of the @anthropic-ai/claude-code package was leaked, not user data, conversation history, or model weights.
- A real look at how a production AI agent is actually built.
- Architecture patterns for memory, tool systems, and multi-agent orchestration.
- A reference for problems every AI team is currently trying to solve.
- Community rewrites and ports appeared within hours.
References:
For the complete architecture behind Claude Code, refer to “Everyone Analyzed Claude Code’s Features. Nobody Analyzed Its Architecture”.
A complete breakdown of the Claude Code analysis on GitHub: Kuberwastaken/claude-code
Claw-code: Claw Code is an open-source AI coding agent framework — a clean-room rewrite of the Claude Code agent harness architecture, built from scratch in Python and Rust.
The Claude Code Leak was originally published in Level Up Coding on Medium, where people are continuing the conversation by highlighting and responding to this story.