A thread on r/singularity is currently exploding, and not without reason. In an interview with CBS, Anthropic CEO Dario Amodei confirmed something many have long suspected: the military does not use the same Claude that you and I chat with. According to Amodei, the military version is one to two generations ahead of what is available to the public and has already "revolutionized and radically accelerated" what the American military can do. He added that this is just the beginning of what has been rolled out.

That's quite a lot to take in.

We are talking about an AI capability gap between the military and the rest of us amounting to a full generational shift, possibly two.

But it doesn't stop there. The story has already taken a dramatic turn. Anthropic secured a $200 million contract with the Pentagon in July 2025, and Claude is reportedly the only AI model approved for classified DoD systems at the highest level (Impact Level 6). So far, so good.

The problem is that the Pentagon wanted more. The Department of Defense pushed to remove Anthropic's two non-negotiable ethical lines: no fully autonomous weapons, and no mass surveillance of its own citizens. When Anthropic refused to yield, Secretary of Defense Pete Hegseth set a hard deadline of February 27, 2026. When that date passed without Anthropic bending, the company was officially designated the next day as a "supply chain risk to national security", which effectively prohibits military contractors from doing business with Anthropic.

Amodei has called the designation "retaliatory and punitive", and it's hard not to see his point.

This is interesting for several reasons. First, it confirms that a real AI capability gap already exists between what the military has access to and what the public sees. Second, the conflict shows that AI companies' internal ethical policies are now colliding with direct state power, and that resisting carries real consequences. Third, the Anthropic case is an early example of what could become a much larger debate: who truly decides what AI models can be used for when national security is on the table?

Remember, these are early signals from community discussions and preliminary reports, but they point in one clear direction: the battle for control over military AI is no longer theoretical. It is happening now.