The US Department of Defense announced on Friday, May 1, that seven of the world's leading technology companies have been cleared to deliver AI tools on classified military networks. The agreements mark a new phase in the Pentagon's ambition to build what it calls an "AI-first fighting force" – but they also shut out the company that was previously the sole supplier on those systems.

Seven In – One Out

OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, Elon Musk's xAI, and the startup Reflection are all covered by the new agreements, according to the Pentagon's announcement as reported by The Verge. What they have in common is that they can now offer AI capabilities in classified settings within the US defense establishment.

Anthropic's Claude models were previously the only AI tools available on the Pentagon's classified networks. That exclusive position is now gone.

Anthropic was labeled a "supply chain risk" – the first time the designation has been applied to a US-registered company.

Supply Chain Risk – An Unusual Designation

Behind the exclusion lies a legal and political conflict that escalated sharply through the winter and spring of 2026. In March, the Pentagon formally declared Anthropic a "supply chain risk" – a term that, under US law, normally refers to the risk that a foreign power could sabotage or undermine a system from within. Applying it to a domestic, San Francisco-based company is, according to experts, unprecedented.

The conflict stemmed from a defense contract worth approximately $200 million. Anthropic refused to grant the military unrestricted usage rights to its models for "all lawful purposes," insisting on retaining safeguards intended to prevent the models from being used for mass surveillance of American citizens or in autonomous weapon systems without human oversight.

On February 27, President Donald Trump ordered all federal agencies to discontinue their use of Anthropic's technology within six months. Secretary of Defense Pete Hegseth announced shortly thereafter that no supplier, subcontractor, or partner of the US defense establishment could do commercial business with the company.

Legal Battle with Mixed Outcomes

Anthropic has not accepted the situation passively. On March 9, 2026, the company filed suit in two federal courts. On March 26, Judge Rita Lin granted it a temporary injunction, noting that the measures appeared "designed to punish Anthropic" rather than to address a genuine supply chain risk.

Nevertheless, a federal appeals court rejected Anthropic's petition to have the designation lifted, holding that it would not be appropriate to force the military to keep working with an "unwanted supplier of critical AI services in the midst of an ongoing military conflict."

The Pentagon's Chief Technology Officer, Emil Michael, confirmed on Friday that Anthropic is still classified as a supply chain risk.

The court emphasized ongoing military operations – an argument that underscores how deeply AI is now integrated into active warfare.

Criticism from Experts

Expert opinion is divided. Nada Sanders, a professor of supply chain management at Northeastern University, calls the designation an "unprecedented" move and suggests it may be a tactical negotiating maneuver intended to shift the balance of power between Silicon Valley and Washington.

Morgan Plummer, Vice President of Policy at Americans for Responsible Innovation, warns that the decision sets a "dangerous precedent" that could stifle innovation and discourage companies from building safety limits into their AI systems.

However, questions have also been raised about Anthropic's own principles. The company has accepted the use of Claude models for missile defense and cyber defense, and the models are reported to have been used in connection with military operations against Iran. Critics argue this makes the company's ethical framework inconsistent.

Technical Questions Surrounding Claude

In parallel, technical objections have circulated about Anthropic's products. Cybersecurity experts, among them TrustedSec CEO Dave Kennedy, claim that code quality in the Claude Opus models has declined significantly since February 2026, and that security vulnerabilities appear in more than half of the cases where the model generates code. It is worth noting that these claims come from a private security firm and have not been independently verified.

In addition, the source code for Anthropic's developer tool Claude Code was accidentally exposed to the public in March 2026, which reportedly led to compromised copies being circulated.

Key figures: 7 new AI suppliers to the Pentagon; $200 million in the disputed defense contract with Anthropic.

NATO Perspective: What Does This Mean for Norway?

For Norway, a NATO ally and close US partner in intelligence and defense cooperation, this development is worth following closely. The Pentagon's choice of AI suppliers, and the technological standards it sets for classified military systems, will over time shape the tools and protocols that feed into the alliance's shared technological infrastructure.

The case also raises fundamental questions that are not limited to the American market: To what extent can technology companies maintain self-imposed ethical limits when supplying military clients, and what happens when those limits clash with the client's demands? This discussion concerns European and Norwegian companies and authorities just as much.

In any case, Friday's announcement from the Pentagon makes one thing clear: the race to become the preferred AI supplier to the world's most resource-rich military apparatus is in full swing – and security policy and technology regulation are increasingly becoming one and the same issue.