A PDF that landed on Hacker News last night has ignited discussion: at 466 points and over 300 comments, this is clearly something people are staying up late to read. The system card for Claude Mythos Preview has been leaked, or rather quietly published, and it makes for wild reading.

In short: Anthropic has built a model they themselves describe as capable of "surpassing all but the most skilled humans" when it comes to finding and exploiting software vulnerabilities. And precisely because it is so good, they plan not to give most people access to it. Ever.

Instead, they have launched something they call Project Glasswing — a major security program where Mythos Preview is used defensively, meaning to find vulnerabilities before malicious actors do. The partner list is absurd: AWS, Apple, Google, Microsoft, Cisco, NVIDIA, JPMorganChase, and a host of others. Over 40 organizations in total.

The results are what really take people's breath away. The model found a 27-year-old bug in OpenBSD and a 16-year-old security vulnerability in FFmpeg, flaws that had survived millions of automated tests. It can also chain multiple Linux kernel vulnerabilities to autonomously escalate privileges to full control of a machine.

An Anthropic researcher says he has found more bugs in recent weeks than in the rest of his career combined.

Benchmark figures underscore the leap: Mythos Preview scores 93.9% on SWE-bench Verified, compared to 80.8% for Claude Opus 4.6. In cybersecurity-specific tests, the gap is even wider.

Anthropic's own words here are unusually stark. They write that if similar capabilities spread to actors without a responsible approach, the consequences for the economy, public safety, and national security could be "severe." That is why they are running Glasswing now; they themselves call it an "urgent attempt to put capabilities into defense."

The HN threads are worth following. One discusses the cybersecurity capabilities themselves; another addresses Project Glasswing more broadly. The mood is mixed: impressed, but with some discomfort at the fact that we now have AIs considered too dangerous to share, yet powerful enough to reshape the entire security landscape.

Important caveat: these are still early signals based on community discussion and a publicly disclosed system card. We know little about what has actually been patched, and the coalition partners have not yet commented broadly. But this is definitely something mainstream tech media will pick up in the coming days.