The relationship between artificial intelligence and military power has rarely come to a head as sharply as it has now. The U.S. Department of Defense (DoD) has formally designated the AI company Anthropic as a supply chain risk to national security, a classification that effectively bars the company from lucrative government contracts.

Anthropic's Red Lines

The conflict stems from Anthropic's internal guidelines, which set clear limits on how the company's Claude models can be used. According to available information, the company prohibits, among other things, the use of its AI for mass surveillance of populations without consent and for fully autonomous weapon systems that can identify and attack targets without human intervention.

Anthropic's CEO Dario Amodei has reportedly been clear that the company is not willing to allow Claude to be used in systems where machines make lethal decisions without human control. The company has also sought contractual guarantees that the models will not be used for mass surveillance of American citizens.

“We cannot in good conscience comply with the military's terms. This is about the principle of standing up for what is right” — Dario Amodei, CEO of Anthropic

Pentagon's Demands and Reaction

The Pentagon's stance is that AI providers must make their technology available for “all lawful purposes” without self-imposed restrictions. Under the leadership of Chief Technology Officer Emil Michael, the Department of Defense rejected Anthropic's attempt to dictate terms, arguing that Congress, not private companies, is the proper authority to regulate military practice.

The department also claims that the use cases Anthropic is concerned about are already covered by existing legislation and internal DoD directives, including Directive 3000.09 on autonomy in weapon systems.

The Fear of Sabotage

Perhaps the most dramatic element of the case is the Department of Defense's own justification: according to available information, the Pentagon fears that Anthropic could deactivate its technology or alter the models' behavior before or during active military operations. That possibility alone, the department argues, is sufficient reason to classify the company as an unacceptable risk.

The Pentagon fears that a private AI company could turn off the lights in the middle of an ongoing military operation

It is worth noting that the claim of potential sabotage is the U.S. government's assessment, not a documented intention on Anthropic's part. The company disputes the characterization.

Systems Affected

According to sources, President Trump instructed all federal agencies to cease using Anthropic's technology. The Department of Defense's internal directive from March 6, 2026, specifies that Claude models are to be removed from systems related to nuclear weapons, ballistic missile defense, and cyber warfare.

200M USD: original contract value
180 days: deadline for removal of Anthropic technology

An Industry-Defining Precedent

The case has been closely followed by the AI industry, as it raises fundamental questions about the extent to which private technology companies can set ethical boundaries with government clients. OpenAI, according to available information, maintains similar restrictions against mass surveillance and fully autonomous weapon systems, but has framed them with reference to existing legislation and DoD policy, which gives the authorities greater room for interpretation.

Anthropic's legal complaint against the Pentagon and other federal agencies is now under review. The company claims the designation is illegal retaliation for adhering to its ethical guidelines.

Paradoxically, sources report that Anthropic's Claude models were already being used by the U.S. military to identify targets in operations against Iran, despite the ongoing political and legal conflict. This claim has not been independently confirmed by 24AI.