The conflict between AI company Anthropic and the U.S. Department of Defense (DoD) is not just about one contract. It is becoming a litmus test for the entire question of who decides on military artificial intelligence — the technology companies that build it, or the authorities who want to use it.

The Core of the Disagreement

Anthropic's AI assistant Claude is in the spotlight after the company demanded contractual clauses prohibiting the use of the model in two specific scenarios: mass surveillance of American citizens and autonomous weapon systems lacking what the company calls "context-appropriate human judgment and control."

The Pentagon, for its part, rejects the demands as overly intrusive. The Department of Defense argues that such conditions clash with the operational realities and legal complexities of military missions, according to TechCrunch.

"Frontier AI systems are simply not reliable enough to power fully autonomous weapons." — Dario Amodei, Anthropic CEO
Anthropic Denies Pentagon Use of AI for Killer Robots

What Does the Pentagon Say?

The Department of Defense emphasizes that legal and policy frameworks already exist that prohibit precisely the things Anthropic is concerned about. Pentagon spokesman Sean Parnell has, according to TechCrunch, stated that the department has no interest in using AI for mass surveillance or for developing autonomous weapons that operate without human involvement — and that this is already prohibited by law and internal guidelines.

DoD Directive 3000.09, last updated in January 2023, regulates autonomy in weapon systems and defines categories based on the human operator's role in target selection and engagement. Under Secretary of Defense for Research and Engineering Emil Michael put it this way: "At some level, you have to trust the military to do the right thing."


Broad Support for Anthropic

Anthropic's position has received support from unexpected quarters. OpenAI CEO Sam Altman has publicly backed Anthropic's stance, and nearly 500 employees at OpenAI and Google signed an open letter supporting the company's demand for stronger contractual guarantees, according to TechCrunch's review of the case.

This is not a marginal stance in the industry. It reflects a growing concern among AI developers that their models could be used in ways that violate fundamental ethical principles.

Nearly 500 employees at OpenAI and Google have joined forces to support Anthropic's demands against the Pentagon

Experts: "Responsibility Gap" Is the Real Problem

Behind the commercial conflict lie deeper ethical questions that philosophers and international humanitarian law experts have discussed for years. Philosopher Robert Sparrow has used the term "responsibility gap" to describe situations where autonomous systems cause harm without any identifiable moral agent to hold accountable.

The "Stop Killer Robots" campaign points to two central risks: that machines cannot recognize humans as humans — what the campaign calls "digital dehumanization" — and that algorithmic bias, if embedded in weapon systems, can amplify existing inequalities.

The Vatican has also weighed in. Monsignor Pacho argues that removing the human element from lethal decisions eliminates "the unique human capacity for moral judgment" and "dangerously lowers the threshold for conflict." Pope Francis has explicitly called for a ban on autonomous weapons.

U.S. Opposes International Regulation

On the international stage, the U.S. has adopted an uncompromising position. In the UN Convention on Certain Conventional Weapons (CCW), where autonomous weapon systems have been a central topic, the U.S. has, according to TechCrunch's source material, been "vocally opposed to establishing such measures, claiming they are barriers to innovation."

This resistance to international binding norms makes the internal struggle between tech companies and the Pentagon all the more important: In the absence of international law, private contractual clauses may be the closest thing to actual regulation.


Who Decides?

The dispute between Anthropic and the Pentagon is symptomatic of a larger power play: AI companies hold the technology, but states hold the contracts and legal authority. Neither party has yet yielded, and the outcome could set a precedent for how military AI is regulated — or not regulated — in the years to come.