High-Level Confrontation
U.S. Secretary of Defense Pete Hegseth has summoned Anthropic founder and CEO Dario Amodei to a meeting at the Pentagon. The topic is the military's use of the AI model Claude, and the atmosphere is described as tense. According to TechCrunch, Hegseth has gone so far as to threaten to label Anthropic as a “supply chain risk” – a designation that could have major consequences for the company's ability to work with U.S. authorities.
At its core, the conflict revolves around a fundamental contradiction: Anthropic has built its company around strict ethical guidelines for AI use, while the Pentagon desires more flexible terms for military purposes.
The Venezuela Operation That Triggered the Crisis
The matter gained momentum after the Wall Street Journal reported in February 2026 that the U.S. military had used Claude during an operation aimed at apprehending former Venezuelan President Nicolás Maduro. The connection allegedly went through Anthropic's partnership with data analytics firm Palantir Technologies.
Anthropic denied that it had been involved in planning “specific operations” with the Department of Defense. However, the company confirmed that its usage agreement with the Pentagon is under review – particularly the provisions regarding autonomous weapon systems and domestic mass surveillance.
“No actor should be allowed to dictate guidelines that go beyond what Congress has enacted.”
According to TechCrunch, these words from the Pentagon's Chief Technology Officer illustrate how serious the conflict has become.

What Do Anthropic's Own Rules Say?
Anthropic occupies an unusual position among major AI companies: it has explicit “red lines” enshrined in its terms of service that prohibit a range of military and surveillance-related applications of Claude.
The company operates as a “public benefit corporation” – a corporate form that obliges them to consider the public good, not just shareholders. The “Constitutional AI” framework is a central part of this: Claude is trained against a set of ethical principles derived from, among other things, the UN Universal Declaration of Human Rights.

Contracts at Stake
In the summer of 2025, Anthropic, Google, OpenAI, and xAI were all awarded defense contracts worth up to $200 million each to adapt generative AI solutions for military use. Now, Anthropic's share of these contracts could be at risk.
The Pentagon's Chief Technology Officer has been clear that companies cannot set their own conditions that override congressional decisions. This puts Anthropic in a bind: yielding to pressure would undermine the company's core profile as a responsible AI developer, but standing firm could cost them significant revenue and influence.
A Company Built on Safety
Anthropic has invested heavily in building a reputation for safety and responsibility. In January 2025, the company was certified according to the international standard ISO/IEC 42001:2023 for AI management systems. They also have a “Responsible Scaling Policy” with clear thresholds for when model development must be halted if the security risk is deemed too high.
Dario Amodei himself has publicly warned that AI can “continuously lower the threshold for destructive activity,” and has argued that AI safety and high performance are not contradictions.
The meeting between Amodei and Hegseth will likely be crucial for whether Anthropic manages to navigate between its fundamental ethical obligations and the pressure from the world's most powerful military apparatus.
