OpenAI Enters the Defense Sector

OpenAI has formally entered into an agreement with the U.S. Department of Defense through its subsidiary OpenAI Public Sector LLC. The contract, reported by TechCrunch, has a total value of $200 million and is structured as a one-year "other transaction agreement" focused on developing prototype frontier AI capabilities for national security needs.

The contract covers applications ranging from administrative functions — such as improving healthcare access for soldiers and their families — to strengthening cyber defense and streamlining procurement processes. The estimated completion date for the work is set for July 2026, according to available contract information.

$200M: total contract value
$2M: obligated R&D funding in the current fiscal year

Altman Promises Concrete Safety Limits

OpenAI CEO Sam Altman emphasizes that the agreement is not without limitations. According to TechCrunch, he says the contract contains two core safety principles: an explicit prohibition on use for domestic mass surveillance, and a requirement that a human always bears responsibility when force is used, which in practice rules out fully autonomous weapon systems.

Altman reportedly confirmed that these principles are enshrined directly in the contract text, and that the U.S. Department of Defense has accepted OpenAI's own "red lines" as part of the terms.

OpenAI retains control over which safety mechanisms are implemented, in which regions the models are operated, and for whom they are made available.
OpenAI signs $200M Pentagon contract with built-in safety limits

Technical Barriers to Prevent Misuse

In addition to the contractually stipulated prohibitions, technical limitations have been built into the deployment architecture itself. The models will run exclusively in cloud environments and will not be integrated into so-called "edge" systems such as drones or fighter jets. This is intended to prevent direct connection to autonomous platforms in combat.

OpenAI also plans to embed its own engineers within government teams working on classified projects. The purpose is to ensure that the models are used in accordance with security protocols and ethical guidelines, the company states.

Three layers of safeguards: contractual terms, technical architecture, and embedded OpenAI engineers

The Anthropic Conflict as a Backdrop

The timing of OpenAI's announcement is not coincidental. According to reports, rival Anthropic recently became embroiled in an open conflict with the Pentagon after refusing to lift its own restrictions on using its Claude models for mass surveillance and autonomous weapons. As a result, Anthropic was reportedly blacklisted from federal contracts by President Donald Trump.

This course of events illustrates the growing tension between AI companies' ethical frameworks and government demands for flexibility in a national security context. OpenAI appears to have chosen a strategy of engagement — but on its own terms.

Critics Warn Against Slippery Slopes

Despite the stated guarantees, there are grounds for critical questions. A central concern among surveillance watchdogs is what is known as "mission creep": that AI tools originally developed for administrative or defensive purposes could gradually be adapted for more offensive operations.

It is worth noting that OpenAI quietly removed the explicit prohibition on "military and warfare" uses from its terms of use back in January 2024. The revised policy still prohibits using its tools to "harm others" or "develop weapons," but the blanket prohibition clause is gone. Critics argue this prioritizes legal leeway over actual safety.

Whether OpenAI can genuinely enforce its own red lines against a large state actor like the DoD remains an open question; for now, there is no documented, independent verification that the described mechanisms work as claimed.