When Anthropic was ousted by the U.S. Department of Defense in February 2026, it took only hours for another AI giant to step in. On February 28, OpenAI announced a formal agreement with the Pentagon to make the company's AI models available in classified settings – a move that raises questions about political timing and about what truly distinguishes the two companies' approaches.
From Prohibition to Formal Agreement
OpenAI has undergone a significant policy shift on military use of its technology. Before January 2024, the company's guidelines explicitly prohibited uses related to "military and warfare" as well as weapons development. In January 2024, OpenAI removed that specific language and replaced it with a broader instruction not to "harm oneself or others," according to MIT Technology Review.
The company presented the change as a cleanup effort to make the document "clearer and more readable," but in effect it opened the door to defense contracts.
Sam Altman himself acknowledged that the negotiations with the Pentagon were "definitely rushed."

What the Agreement Contains
OpenAI's agreement with the Pentagon includes what the company describes as robust security mechanisms. According to MIT Technology Review, "red lines" have been established prohibiting the use of the technology for:

- mass surveillance of American citizens
- fully autonomous weapons that fire without human involvement
Sam Altman maintained that OpenAI's approach is "multi-layered" and more comprehensive than what Anthropic proposed. He suggested that Anthropic sought more operational control than the Pentagon was willing to grant.

The Mirror Image of What Anthropic Demanded
Here the situation becomes paradoxical: the limits OpenAI has now won acceptance for are essentially the same ones Anthropic was penalized for insisting on.
Anthropic had long prohibited mass surveillance of American citizens and fully autonomous weapons that fire without human involvement. When the company demanded that these limits be explicitly enshrined in the contract, the Pentagon declared Anthropic a "national security supply chain risk." President Donald Trump gave all federal agencies six months to phase out Anthropic technology.
Anthropic's CEO Dario Amodei publicly stated that he could not "in good conscience" allow the company's technology to be used for mass surveillance or to independently control autonomous weapon systems.
The difference appears to lie in implementation, not in principles. Where Anthropic sought legally binding contractual language, OpenAI relies on a combination of contractual protections, technical safeguards, and personnel oversight.
Expert Skepticism
Several experts are critical of the development. Heidy Khlaaf, Technical Director at Trail of Bits, warns that the shift from explicit prohibitions to a more discretionary, legality-based approach could have serious consequences for AI safety – with risks of bias and increased harm in military contexts, according to MIT Technology Review.
Jimena Sofía Viveros Álvarez, a member of the UN's High-Level Advisory Body on AI, warns against allowing AI systems to play a role in military targeting, regardless of the security layers in place.
Furthermore, the timing of OpenAI's announcement – just hours after Anthropic's exclusion – has led experts to question whether there was political coordination in the Pentagon's choice of AI partners.
What This Means Going Forward
The case illustrates a growing tension in the AI industry: how should companies balance commercial interests against ethical obligations in the face of government pressure? The OpenAI agreement shows that compromise with the Pentagon is possible – but it also raises the question of whether such compromises reflect negotiation tactics more than genuine principled differences between the companies.
That Altman himself called the process "definitely rushed" gives little reason to believe that all the implications have been thoroughly considered.
