One of the most fundamental clashes between the private AI industry and American military power is about to take shape in a courtroom. According to MIT Technology Review, the AI company Anthropic plans to sue the U.S. Department of Defense (the Pentagon) in a conflict over which uses of its technology a civilian company can actually deny the state.
Clear Prohibitions in Anthropic's Ethical Framework
Anthropic's flagship product, Claude, is governed by a detailed ethical framework called "Claude's Constitution," an 84-page, 23,000-word document in its latest version from January 2026. The document describes not only the rules themselves but also the reasoning behind them, and according to the company it represents a shift from rule-based to reasoning-based AI alignment.
Central to this framework are absolute prohibitions: Claude must not be used to design weapons of mass destruction, conduct large-scale cyberattacks, generate child abuse imagery — or undermine human oversight of AI systems.
Anthropic has also explicitly stated that it will not allow its technology to be used by public authorities for autonomous weapon systems or mass surveillance. This commitment now appears to be at the center of the conflict with the Pentagon.

The Department of Defense Has Its Own AI Ethics
The Pentagon does not lack ethical guidelines of its own for AI. As early as February 2020, the Department of Defense adopted five core principles for military AI use: systems must be responsible, equitable, traceable, reliable, and governable. These principles are rooted in, among other things, the U.S. Constitution, international humanitarian law, and international treaties.
But where Anthropic's ethics were developed for general civilian use, the DoD's strategy is tailored to high-risk military situations in which "reliability" can mean the difference between life and death. This fundamental difference in context and purpose means the two frameworks cannot easily be reconciled.
A private AI company's attempt to deny the state access to technology raises questions that do not have established legal answers.

Safety Policy Under Pressure
While Anthropic prepares a lawsuit to defend its ethical boundaries externally, recent reports show that the company has internally adjusted its Responsible Scaling Policy (RSP), the voluntary framework it uses to manage catastrophic risks from advanced AI systems. The third version of the RSP was released in February 2026.
Previously, Anthropic committed not to train new AI systems unless adequate safety measures were in place. The updated policy allows competitors' actions and market pressure to weigh against this principle, a change that critics say weakens the company's own safety commitments under economic and political pressure.
This creates a paradox: the company publicly fights for stricter limits on military AI use while internally moving toward more flexible safety standards.
Broader Context: The Most Important AI Questions of Our Time
MIT Technology Review describes the case as part of what its editorial board calls the ten most important issues in AI right now. That framing signals that the conflict between commercial AI companies and state security interests is not an isolated incident but a structural tension that will shape the industry for years to come.
Concrete details about the legal basis for Anthropic's announced lawsuit, and what specific demands the Pentagon has allegedly made, have not yet been publicly disclosed. Claims about the lawsuit plans stem from MIT Technology Review's coverage and have not been independently confirmed by 24AI.
