At a time when artificial intelligence is increasingly shaping military operations, OpenAI has taken a significant step into the defense world. The company has entered into an agreement with the U.S. Department of Defense (Pentagon) to provide AI capabilities to both classified and unclassified military systems — a development that has sparked considerable debate.

From Civilian to Military Use

As recently as January 2026, OpenAI updated its user guidelines, removing a previously explicit prohibition against using the technology for "military and war-related purposes." However, the company maintained the ban on the development and use of weapons. An OpenAI spokesperson told the media that "national security purposes consistent with our mission" are now permitted, according to MIT Technology Review.

In March 2026, it became clear that OpenAI had entered into a formal partnership with Amazon Web Services (AWS) to deliver AI models to U.S. defense and government agencies. This marks the company's first step into classified military work.


Anthropic's Exit Opened the Door for OpenAI

The background for the agreement is partly a conflict between the Pentagon and the AI company Anthropic. The military had used Anthropic's Claude model for tasks such as target identification, intelligence assessments, and simulating battlefield scenarios in the planning of airstrikes against Iran, according to MIT Technology Review, citing U.S. media reports.

When Anthropic refused to allow unrestricted military use of its AI — particularly related to mass surveillance and autonomous weapon systems — the Pentagon reportedly labeled the company a "supply chain risk" and terminated the contract. OpenAI then became one of the players the Pentagon turned to.

It is worth emphasizing that the details of Anthropic's contract and the operations in question have not been officially confirmed by either the Pentagon or Anthropic, and should therefore be treated with source-critical caution.


What Can the Technology Actually Be Used For?

OpenAI specifies that the models will run on cloud-based servers controlled by the company itself — not directly integrated into weapon systems or military equipment. This is described as a security measure against fully autonomous weapons.

Based on known applications for similar AI systems in military contexts, experts point to a range of potential uses: analysis of large data volumes to identify and prioritize targets, intelligence assessment and operations planning, simulation of battlefield scenarios, cybersecurity, and drone defense technology.

Aaron McLean, national security analyst for CBS News, describes the situation: "There is now far more data than any room of analysts could process within relevant timeframes. AI algorithms sift through it to build target packages, assign strike assets, and assess damage — almost instantaneously."


The Dual-Use Problem

Researchers and experts emphasize that commercial AI tools are "dual-use" technologies — meaning technology that can be used for both civilian and military purposes. Steve Feldstein, senior fellow at the Carnegie Endowment for International Peace, points out that such tools have "intelligence and surveillance purposes, and potentially also purposes related to lethal operations," according to MIT Technology Review.

This raises fundamental questions about the extent to which private AI companies should have the power to define the limits of technology use in war situations — and whether the companies' own "red lines" are sufficient guarantees.

As of now, there is little publicly available information on exactly how and in which operations OpenAI's technology will actually be used. The agreement is controversial, and the debate about ethics and responsibility in military AI use is far from settled.