When President Donald Trump signed an order on February 27, 2026, excluding Anthropic's AI technology from the entire federal government apparatus, it sent shockwaves through the tech industry. Anthropic is no small startup: at the time, the company held contracts with the Department of Defense, intelligence agencies, and numerous other federal bodies. Now the future of those agreements is highly uncertain.

A Thriving Business Meets Political Opposition

According to Stratechery, Anthropic's business with enterprise customers is growing at a near-explosive pace. That revenue surge means what was once a relatively limited collaboration with the federal government now carries immense strategic and commercial weight. It is no longer a matter of symbolic pilot projects; it concerns entrenched infrastructure in sensitive environments.

Anthropic had, among other things, entered into a two-year prototype agreement with the Department of Defense's Chief Digital and Artificial Intelligence Office (CDAO) worth up to $200 million, according to research sources cited by Stratechery. Additionally, Claude models were already in use within the Pentagon's classified networks through a collaboration with data analytics company Palantir.

The stakes in numbers: a ceiling of $200 million on the Pentagon contract, and more than 10,000 researchers with Claude access at Lawrence Livermore.
Trump Bans Anthropic from Pentagon – $200 Million Contract at Risk

The Core of the Dispute: Who Decides on AI Use?

The conflict between Anthropic and the Pentagon is fundamentally about who has the final say in how the company's AI models can be used. Secretary of Defense Pete Hegseth demanded that contract terms allow "any lawful use," without specific limitations imposed by the company itself.

Anthropic, for its part, insisted on guarantees against Claude being used for mass surveillance of American citizens or in fully autonomous weapon systems. CEO Dario Amodei emphasized that the company is genuinely committed to contributing to American national security, but that certain applications are either outside what is ethically justifiable or beyond what current technology can reliably handle.

"Painting a target on Anthropic makes for juicy headlines, but everyone loses in the end" — retired General Jack Shanahan, former head of the Pentagon's AI initiative

Shanahan, a retired Air Force general, described Anthropic's "red lines" as reasonable, according to source material, and warned that the conflict would harm all parties involved.


Vindictive, Says Anthropic

On February 27, 2026, Secretary of Defense Hegseth formally designated Anthropic a "national security supply chain threat." The designation is far-reaching: it not only prohibits direct government contracts but also bars any supplier, subcontractor, or partner working with the U.S. military from doing business with Anthropic.

Dario Amodei characterized the decision as "vindictive and punitive" and announced legal action against the supply chain designation, according to the same source.

"The supply chain designation not only directly impacts Anthropic — it threatens all suppliers' ability to use Claude"

Proprietary Models for Intelligence Were Already Operational

A detail that makes the decision particularly noteworthy is that Anthropic had already developed customized "Claude Gov" models for U.S. national security actors. According to source material, these were already operational in classified environments and designed for, among other things, improved handling of classified information, cyber operations, and intelligence analysis.

Additionally, Anthropic, through a collaboration with the General Services Administration (GSA), had offered Claude to all three branches of the U.S. government for one dollar per agency per year — a measure to simplify access and standardize usage.

What Happens Now?

The Trump order gives the Department of Defense and other agencies with existing Anthropic contracts six months to phase out the technology; agencies without such contracts must stop using it immediately.

Stratechery points out that the explosive revenue growth in Anthropic's enterprise market makes a compromise with the government all the more crucial for the company. The case is not merely a principled debate about AI safety — it has direct consequences for a rapidly growing company and for a national security infrastructure that has already come to depend on Claude's capabilities.

The lawsuit Amodei announced will likely be an important test of the limits of state power to designate private AI companies as security risks — and of what technology companies can demand in negotiations with their largest public customers.