The dispute between leading AI companies and the U.S. military has escalated. According to Stratechery, Anthropic is now in a direct confrontation with what is being referred to as the "Department of War": a conflict not just about one contract, but about the entire premise of who should control the boundaries of military artificial intelligence.

Anthropic Puts Its Foot Down
Anthropic CEO Dario Amodei has made it crystal clear that the company will not allow Claude to be used for mass surveillance of civilians or for fully autonomous weapons, meaning systems that identify and engage targets without human control. His justification is that today's models are simply not reliable enough for such decisions.
According to the reporting, the Pentagon sought unrestricted access to Anthropic's technology for "all lawful purposes," which the company declined. The consequence was dramatic: the Trump administration blacklisted Anthropic as a "supply chain threat" and ordered federal agencies to stop using the company's products. Despite threats of exclusion from defense programs and reported attempts to use the Defense Production Act as leverage, Amodei has held firm.
"No amount of pressure from the Pentagon will change our stance on mass surveillance or fully autonomous weapons." — Dario Amodei, CEO of Anthropic

OpenAI: Contractually Defined Boundaries, But With Loopholes?
OpenAI chose a different path. The company entered into an agreement with the Pentagon to make its advanced models available in classified environments, but with three explicit prohibitions: no use for mass surveillance, no control of autonomous weapons, and no use in high-risk automated decision-making such as so-called "social credit" systems.
Sam Altman has stated that the Department of Defense agreed to these principles and that they are enshrined in the contract. The deployment itself is cloud-based, and OpenAI retains control over its own security stack.
Critics, however, point out that the phrase "all lawful purposes" could give the military greater leeway if the relevant legislation changes. That flexibility stands in clear contrast to Anthropic's approach, where the boundaries are absolute and not dependent on context.

Google DeepMind Reverses Course
Where Anthropic and OpenAI at least maintain a set of ethical frameworks, Google has taken a far more radical turn. In 2025, the company quietly revised its AI principles, removing the explicit prohibition against developing AI for weapons and surveillance. This marks a clear break from the principles established in 2018 in the wake of the internal revolt over Project Maven.
DeepMind CEO Demis Hassabis has defended the reversal by arguing that democracies should lead AI development based on values such as freedom and human rights, and that "the benefits must significantly outweigh the risk of harm." Nevertheless, the reporting shows that nearly 200 DeepMind employees signed a letter in 2024 asking the company to terminate its military contracts, concerned that such work undermines its credibility as an ethical AI company.

An Industry Without a Common Standard
The Stratechery analysis points out that this conflict reveals a structural problem: the AI industry lacks a common, binding standard for military use. Each company defines its own red lines — and these can change under commercial or political pressure.
At the same time, Anthropic's position is controversial for other reasons. Stratechery suggests that while the company's concerns may be legitimate, it is problematic for a private company to deny a democratically elected government access to technology on the basis of its own ethical premises. It is a balancing act with no easy answers, and a debate that is far from over.
The question that remains is who should have the final say: the governments that fund and regulate technology development, or the companies that actually build it.
