An ethics dispute between the U.S. Department of Defense (DoD) and AI company Anthropic has resulted in Anthropic being shut out of the defense contract market – and in OpenAI stepping in as a replacement, according to TechCrunch.

What Was the Disagreement About?

The conflict revolves around how much control the military should have over commercial AI models. According to the report, the Department of Defense requires its contractors to grant the government an "irrevocable license" to use their models for all lawful purposes – without reservation.

For Anthropic, two red lines were non-negotiable: its models must not be used for mass surveillance of American citizens or in fully autonomous weapons systems. Co-founder and CEO Dario Amodei stated that the company "cannot in good conscience comply" with the Pentagon's demands.

"We cannot in good conscience comply" with the Pentagon's demands, says Anthropic CEO Dario Amodei.

As a result, the Pentagon formally classified Anthropic as a "supply chain failure" – a designation that effectively bars the company from defense contracts.

Two Companies, Two Choices

Where Anthropic withdrew, OpenAI stepped in. The company accepted the $200 million contract that Anthropic declined. The user response was immediate and brutal: the number of ChatGPT uninstallations increased by 295 percent.

$200 million
Value of the contract offered by the Pentagon

295%
Increase in ChatGPT uninstallations after OpenAI signed

The two companies thus represent two very different strategies for handling government contracts: Anthropic prioritizes its own ethical boundaries, while OpenAI accepts the terms and takes the commercial gain – at an obvious reputational cost.

Pentagon's and Anthropic's Ethical Frameworks Are Fundamentally Different

Anthropic is best known for its "Constitutional AI" method, in which its model, Claude, operates according to a foundational document with clearly defined values and hard-coded prohibitions. According to the report, these prohibitions include assisting with weapons of mass destruction and attempting unlawful seizures of power.

In February 2026, Anthropic updated its Responsible Scaling Policy (RSP), but not without controversy: the company removed a previous promise not to train an AI model unless sufficient safety measures could be guaranteed beforehand.

What Does This Mean for the AI Industry?

The dispute between the Pentagon and Anthropic is the first major public confrontation between an AI company's ethical boundaries and the state's demand for unlimited control.

The case raises questions that will shape the industry for a long time: Can commercial AI companies maintain their own ethical frameworks when the state exerts pressure? And what price are users and investors willing to pay for compromises?

The reaction to OpenAI's signing – in the form of mass uninstallations – suggests that the trust capital of AI companies is more closely tied to ethical credibility than many assumed. Anthropic, for its part, has paid the price in lost contracts, but Amodei's statements frame that loss as deliberate positioning.

It remains to be seen whether the Pentagon's model – in which the state demands an "irrevocable license" for all lawful purposes – will become the norm for future AI contracts, or whether resistance from companies like Anthropic will force a more nuanced framework.