Secretary of Defense Pete Hegseth has sent a clear signal to AI company Anthropic: Grant the military unrestricted access to AI technology, or watch the contract disappear. According to TechCrunch, the deadline is February 28, 2026 – and Anthropic currently has no intention of yielding.

A $200 million contract hangs by a thread

In July 2025, Anthropic was awarded a $200 million contract with the U.S. Department of Defense (DoD). The mission was to develop advanced AI capabilities to meet critical national security needs. The company also gained an exclusive position: the Claude model was the only large language model approved for use on the DoD's classified networks.

Now that position is under severe pressure.


According to TechCrunch, Hegseth reportedly told Anthropic CEO Dario Amodei directly that the Pentagon "will not use AI models that do not allow you to wage war." The Department of Defense is now considering labeling Anthropic a "supply chain risk" – a classification that would effectively force other defense contractors to cut ties with the company.

Pentagon gives Anthropic ultimatum: Remove AI safety rules or lose $200M contract

What is Anthropic refusing to do?

Anthropic's safety rules – the so-called guardrails – are designed to prevent the Claude model from being used to control fully autonomous weapon systems or conduct mass surveillance of the domestic population. These are principles the company is unwilling to abandon, even under pressure from the nation's top military leadership.

Anthropic prioritizes AI ethics over one of the most lucrative government contracts in the industry

This is not the first time tensions between commercial AI companies' ethical guidelines and military requirements have surfaced, but the standoff between the Pentagon and Anthropic is likely the most public and formalized to date.


Competitors are more flexible

While Anthropic stands firm, the situation looks different among its competitors. According to sources cited by TechCrunch, OpenAI, Google, and Elon Musk's xAI have all received similar $200 million contracts – and they appear more willing to adapt to the Pentagon's demands.

The fact that xAI is already approved for classified networks has, according to TechCrunch, broken Anthropic's previously exclusive position in this area.

A matter of principle with major consequences

The dispute is about more than a single contract. It exposes a fundamental question that the AI industry has not yet answered: Who determines the limits of what artificial intelligence can be used for in warfare?

The Pentagon's position is clear – the military wants AI tools that can be used for "all lawful purposes" without developers imposing obstacles. Anthropic, on the other hand, believes that precisely such obstacles are crucial for the responsible development of the technology.

How this ends by February 28 will not only determine the fate of Anthropic's defense contract – it could also set a precedent for how the rest of the AI industry approaches military clients in the future.

Sources: TechCrunch (February 24, 2026)