A rare paradox is unfolding in Washington: while the Department of Defense has cracked down hard on the AI company Anthropic, declaring it a national security risk, officials linked to the Trump administration are now actively encouraging financial institutions to adopt the company's new AI model, TechCrunch reports.
Pentagon Said No — Financial Sector Gets Green Light?
In an apparent lack of internal coordination within the administration, the Trump circle has reportedly encouraged banks to test Anthropic's Mythos model. The move directly contradicts the Department of Defense's official stance, according to TechCrunch.
The DoD recently classified Anthropic as a "supply chain risk to national security," a designation that until now has been used exclusively against foreign entities. The consequences are severe: the military is prohibited from using Anthropic's AI models, and defense contractors must confirm in writing that they do not use the company's technology in work for the government.
"America's warfighters will never be held hostage by the ideological whims of Big Tech" — Secretary of Defense Pete Hegseth

The Core of the Conflict: Ethical Limits on Military Use
The reason for the dramatic classification is not technological flaws or security vulnerabilities in the traditional sense. It is a principled disagreement: Anthropic refused to grant the Pentagon unrestricted access to its Claude model, insisting on limits against its use for mass surveillance of civilians and for fully autonomous weapon systems that select and attack targets without human control.
The Pentagon, for its part, demanded the right to use the technology for "all lawful purposes." Secretary of Defense Pete Hegseth claimed that Anthropic was trying to "bring the American military to its knees."
Anthropic's CEO Dario Amodei rejects this characterization. He emphasizes that the company's concerns are about overarching use cases — not daily operational decision-making — and that AI systems are not infallible enough to be granted full autonomy.
Anthropic's own lawyers have called the DoD decision "legally untenable" and announced a legal battle, claiming it sets "a dangerous precedent for any American company negotiating with the government."

Chaotic Signal to the Financial Sector
The apparent lack of internal coordination in Washington sends confusing signals, not least to actors in the financial sector who must manage supplier risk and regulatory expectations.
Banks and financial institutions are generally subject to strict requirements for supplier control. In Europe and Norway, the DORA regulation (Digital Operational Resilience Act), among other frameworks, requires financial undertakings to assess the risk posed by third-party providers of critical technology, including AI. A scenario in which a company's technology is classified as a national security risk by one arm of a government while another arm recommends it is one for which no established handling protocol exists.
Anthropic's Infrastructure and Expansion
Despite the conflict with the Pentagon, Anthropic is a well-capitalized and rapidly growing player. The company currently uses processing power from Google (TPUs), Amazon (Trainium and Inferentia), and Nvidia (GPUs). According to available information, Anthropic recently entered into a long-term collaboration with Google and Broadcom on specialized AI hardware, with plans to install 3.5 gigawatts of computing capacity in data centers in New York and Texas.
Total investments in American computing infrastructure are estimated at around $50 billion. The company is also considering designing its own processor chips to reduce external dependence.
Norwegian Banks Should Pay Attention
For Norwegian and Nordic financial institutions, the matter is more than American internal drama. The choice of AI supplier is today a strategic and regulatory question. The situation surrounding Anthropic illustrates that even well-established, privately funded AI companies can come under political pressure that directly affects which terms of use can be relied on over time, and that supplier risk is not just about technical reliability but also about a company's political standing in its home market.
The source for this article is TechCrunch (April 12, 2026), as well as additional information on the DoD classification and Anthropic's infrastructure from open sources.
