OpenAI shares Pentagon contract language

Written by Joseph Nordqvist · February 28, 2026 at 10:11 PM UTC

7 min read

OpenAI published excerpts of its agreement with the Department of Defense on Saturday morning. The language is more detailed than expected, yet more ambiguous than it first appears.

The blog post, titled "Our agreement with the Department of War," offers what the company calls a point-by-point explanation of the deal CEO Sam Altman announced late Friday night, hours after Anthropic was blacklisted by the Pentagon for refusing to allow unrestricted military use of its AI.

The blog post is unusual. Companies rarely publish excerpts of classified military contracts. OpenAI chose to, and that choice invites close reading. What follows is a line-by-line look at what the published language actually says, what it claims, and what it leaves open.

Three Red Lines, Not Two

OpenAI says it entered the deal with three conditions, not the two that defined Anthropic's standoff with the Pentagon:

  1. No use of OpenAI technology for mass domestic surveillance.

  2. No use of OpenAI technology to direct autonomous weapons systems.

  3. No use of OpenAI technology for high-stakes automated decisions (e.g., "social credit" systems).

The first two mirror Anthropic's stated red lines exactly. The third, prohibiting high-stakes automated decision-making, is new. OpenAI does not explain in the post why it added this condition or how it is defined beyond the "social credit" example. It does note that its red lines "are generally shared by several other frontier labs."

"All Lawful Purposes"

Perhaps most interesting, the contract language OpenAI published opens with the same formulation the Pentagon demanded Anthropic accept, and that Anthropic rejected:

The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.

This was the core of the dispute. Anthropic's CEO Dario Amodei argued in his February 26 statement that an "all lawful purposes" clause was insufficient because existing law has not caught up with AI capabilities — that activities like AI-enabled mass surveillance or autonomous targeting could be characterized as lawful under current statute even if they cross ethical lines Anthropic was unwilling to accept.

OpenAI, on the other hand, has accepted this language.

The Autonomous Weapons Clause

The published contract states:

The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.

The agreement then cites DoD Directive 3000.09 (dated January 25, 2023), which requires "rigorous verification, validation, and testing" before any AI system can be used in autonomous or semi-autonomous weapons.

Two things are worth noting. First, the restriction is conditional: it applies "in any case where law, regulation, or Department policy requires human control." If a future interpretation of policy determined that a particular weapons application did not require human control, this clause would not apply to it. Second, the cited directive (3000.09) is a Department of Defense policy, not a statute. Policies can be revised or rescinded by the Secretary of Defense without congressional action.

OpenAI addresses this partially in its FAQ, arguing that the deal's cloud-only deployment architecture means edge deployment — which would be needed for autonomous weapons — is not possible under the agreement. “The cloud deployment surface covered in our contract would not permit powering fully autonomous weapons, as this would require edge deployment,” the company wrote.

This is a structural argument rather than a legal one: the technology simply will not be deployed in a way that makes autonomous weapons use possible. Whether this architectural constraint holds over the life of the contract is an open question.

The Surveillance Clause

On surveillance, the published language reads:

The AI System shall not be used for unconstrained monitoring of U.S. persons' private information as consistent with these authorities.

The clause cites the Fourth Amendment, the Foreign Intelligence Surveillance Act (FISA), Executive Order 12333, the National Security Act of 1947, and the Posse Comitatus Act.

The word doing the most work here is "unconstrained." The contract does not prohibit monitoring of U.S. persons' private information. It prohibits unconstrained monitoring. A surveillance program that operates under targeting criteria, legal authorization, or internal review processes could plausibly be described as "constrained" even if it sweeps up data on millions of Americans.

This is precisely the gap Anthropic identified. Amodei argued in his statement that the Pentagon's proposed language "would allow those safeguards to be disregarded at will." Whether OpenAI's version of that language is meaningfully different is a question the published excerpts do not fully resolve.

OpenAI's FAQ states flatly: "Based on our safety stack, the contract language, and existing laws that heavily restrict DoW from domestic surveillance, we are confident that this cannot happen." But the contract language itself relies on the government's own interpretation of what constitutes "constrained" versus "unconstrained" activity.

The Future-Proofing Clause

One provision stands out as a genuine concession from the Pentagon. In its FAQ, OpenAI states:

Our contract explicitly references the surveillance and autonomous weapons laws and policies as they exist today, so that even if those laws or policies change in the future, use of our systems must still remain aligned with the current standards reflected in the agreement.

This is significant. It means that if Congress passed a law broadening permissible surveillance, or if a future Defense Secretary revised DoD Directive 3000.09, the OpenAI contract would still be governed by today's legal standards.

It does not, however, protect against reinterpretation. If a future Pentagon official interpreted existing law more permissively than it is interpreted today — concluding, for instance, that a particular form of AI-assisted monitoring constitutes "constrained" surveillance under the Fourth Amendment — the future-proofing clause would not trigger, because the law itself would not have changed.

The Claim About Anthropic's Original Deal

The blog post's most pointed passage is not about OpenAI's own agreement. It's about Anthropic's.

OpenAI writes: “We believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic's original contract.” It adds that “other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments.”

OpenAI does not name Anthropic in the second statement, but the implication is clear: Anthropic's original July 2025 contract, the one that made Anthropic the first frontier AI company to deploy models on the Pentagon's classified network, may have relied more on usage policies than on the kind of technical architecture OpenAI is now advertising.

If true, this would complicate Anthropic's narrative. Anthropic has positioned itself as the company willing to sacrifice a government contract over safety principles. OpenAI is suggesting that those principles were not fully reflected in Anthropic's own earlier arrangement.

Anthropic has not yet responded to this specific claim.

The Enforcement Architecture

Beyond the contract text, OpenAI describes what it calls a "multi-layered" enforcement approach with four components:

  1. Cloud-only deployment. Models run in the cloud, not on edge devices. OpenAI says this prevents autonomous weapons applications.

  2. Retained safety stack. OpenAI keeps full control over its safety systems, including the ability to run and update classifiers. The company says it is "not providing the DoW with 'guardrails off' or non-safety trained models."

  3. Cleared personnel. OpenAI will have cleared engineers embedded with the government, plus cleared safety and alignment researchers "in the loop."

  4. Contract termination. OpenAI states that "as with any contract, we could terminate it if the counterparty violates the terms."

OpenAI's Position on Anthropic's Blacklisting

The blog post explicitly addresses Anthropic's supply-chain risk designation. Under the heading "Do you think Anthropic should be designated as a 'supply chain risk'?" OpenAI answers: "No, and we have made our position on this clear to the government."

OpenAI also posted on X: "We do not think Anthropic should be designated as a supply chain risk and we've made our position on this clear to the Department of War."

The company says it asked the Pentagon, as part of its deal, to "try to resolve things with Anthropic" and to extend the same terms to all AI labs. "We don't know why Anthropic could not reach this deal, and we hope that they and more labs will consider it," the FAQ states.

Written by

Joseph Nordqvist

Joseph founded AI News Home in 2026. He studied marketing and later completed a postgraduate program in AI and machine learning (business applications) at UT Austin’s McCombs School of Business. He is now pursuing an MSc in Computer Science at the University of York.

This article was written by the AI News Home editorial team with the assistance of AI-powered research and drafting tools. All analysis, conclusions, and editorial decisions were made by human editors. Read our Editorial Guidelines.

References

  1. Our agreement with the Department of War, OpenAI, February 28, 2026 (Primary)
  2. Sam Altman post (@sama), X, February 27, 2026
