A Hacker News thread exploding right now — over 950 comments and 1,000+ points — concerns what could quickly become one of the most consequential accidental leaks in AI history. Someone discovered that Anthropic mistakenly included a gigantic JavaScript source map file in version 2.1.88 of @anthropic-ai/claude-code on npm. The file was meant for internal debugging only; it was never supposed to ship.

The discovery was first shared on X by Chaofan Shou, an intern at Solayer Labs, at 04:23 ET. Within hours, the code had spread to thousands of GitHub repos. Anthropic has sent copyright takedowns to over 8000 repositories, but people have already started rewriting the code into other languages to circumvent them. The cat is out of the bag.

So what did we actually learn? Some of it is technically interesting, but some is genuinely startling:

KAIROS Daemon Mode is internally described as the biggest unpublished feature, with over 150 references in the codebase. It is an always-on background agent that acts proactively, keeps daily logs, and has a "Dream mode" that consolidates observations and manages memory without user interaction. This is not a chatbot. This is something closer to an autonomous coworker.

Undercover Mode is something else entirely. The instruction tells Claude Code to scrub all traces of Anthropic internals when working in public repos: never mention internal codenames, never acknowledge AI authorship in commits. It's not necessarily malicious, but it's exactly the kind of thing that will ignite ethical debates about AI transparency.

The code also deliberately injects "fake tools" and poisoned training data, aimed at competitors trying to learn from Claude Code's API traffic.

Anthropic's official line is that no sensitive user data, model weights, or credentials were exposed, and that this was human error in the release process — not a breach. That is likely true. But it is cold comfort when strategic competitive information, internal codenames ("Capybara" = Claude 4.6, "Fennec" = Opus 4.6), and a detailed architecture guide are now publicly available to OpenAI, Google, and everyone else.

This is also Anthropic's second leak in a short time, which is starting to raise questions about internal security practices — not just as a security problem, but as a matter of trust.

We are in the early stages here. Community sources like Reddit and HN are driving this forward right now. Mainstream tech media is starting to pick it up, but in-depth analyses are still lacking. Stay tuned.