OpenAI strikes Pentagon deal hours after Anthropic blacklisted — with seemingly the same terms Anthropic was punished for requesting
Written by Joseph Nordqvist · February 28, 2026 at 8:28 PM UTC
11 min read

1. Anthropic blacklisted for demanding no mass surveillance and no autonomous weapons in its Pentagon contract
2. OpenAI signed a deal hours later with the same two red lines but a different enforcement mechanism
3. OpenAI published contract language: it opens with "all lawful purposes" and restricts "unconstrained monitoring", key terms Anthropic argued are too vague
4. First time the U.S. has ever designated an American company a supply-chain risk; Anthropic vows to sue
5. 700+ Google and OpenAI employees backed Anthropic; OpenAI itself opposes the blacklisting and Anthropic's designation as a "supply chain risk"
OpenAI CEO Sam Altman announced late Friday night that his company has reached an agreement with the Department of Defense to deploy its AI models on the Pentagon's classified network.
The announcement came just hours after President Donald Trump ordered every federal agency to stop using rival Anthropic's technology, and Defense Secretary Pete Hegseth designated Anthropic a "supply-chain risk to national security," a classification normally reserved for companies linked to foreign adversaries.
The stated reason for Anthropic's punishment was its refusal to allow the Pentagon unrestricted use of its Claude AI model. Anthropic had insisted on two conditions: that its technology not be used for mass surveillance of American citizens, and not be deployed in fully autonomous weapons systems that make lethal decisions without human oversight.
OpenAI's new deal, according to Altman, includes protections addressing those same two issues.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. “The DoW [Department of War] agrees with these principles, reflects them in law and policy, and we put them into our agreement”.
"It is the central question this story raises: if the Pentagon was willing to accept these restrictions from OpenAI, why was Anthropic blacklisted for requesting them? OpenAI has now published its own explanation, but whether it holds up depends on details that cut both ways.
The company also posted on X that it does not believe Anthropic should be designated as a supply chain risk.
How the Anthropic standoff began
The dispute traces back to January, when the U.S. military's operation to capture Venezuelan President Nicolás Maduro reportedly involved Anthropic's Claude model, deployed through a partnership with defense contractor Palantir. According to a senior Pentagon official, an Anthropic employee allegedly raised concerns with a Palantir executive about how Claude had been used in the operation.
The incident led to uncertainty within the Pentagon about its relationship with Anthropic. A senior administration official was quoted by Axios as saying that ending the partnership with Anthropic was “on the table,” adding that “there'll have to be an orderly replacement [for] them, if we think that's the right answer.”
By mid-February, negotiations between Anthropic and the Pentagon had soured. The Defense Department demanded that Anthropic agree to let the military use Claude for "all lawful purposes." Anthropic was willing to support military applications but wanted two explicit contractual carve-outs: no mass domestic surveillance and no fully autonomous weapons. The Pentagon viewed this as an unacceptable restriction. On February 24, Hegseth gave Anthropic CEO Dario Amodei a hard deadline: agree by 5:01 PM on Friday, February 27, or face consequences.
Anthropic holds its position
On Thursday, February 26, Anthropic said it had received revised contract language from the Pentagon but that the revisions made no progress on its concerns: the nominal safeguards, the company said, were “paired with legalese that would allow those safeguards to be disregarded at will”.
In a lengthy public statement, Amodei refused the Pentagon's terms. “These threats do not change our position: we cannot in good conscience accede to their request,” he wrote. He argued that the company's two conditions had “never been included in our contracts with the Department of War” previously, had not affected a single military mission to date, and addressed use cases that are “simply outside the bounds of what today's technology can safely and reliably do”.
The response from Emil Michael, the Pentagon's Undersecretary for Research and Engineering, was striking in its tone. “It's a shame that @DarioAmodei is a liar and has a God-complex,” Michael wrote on X. “He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk”.
In an exclusive CBS News interview that aired Friday evening, Amodei pushed back against the framing that his company was being un-American. “We believe that crossing those lines is contrary to American values, and we wanted to stand up for American values,” he said. He added: “Disagreeing with the government is the most American thing in the world”.
Friday: Blacklisting, then a new deal
Events moved quickly on February 27. Trump posted on Truth Social calling Anthropic “Leftwing nut jobs” and directing all federal agencies to “IMMEDIATELY CEASE” using the company's products, with a six-month phase-out for agencies like the Pentagon that rely on Claude in classified settings.
Hegseth's statement went further. He accused Anthropic of “duplicity” and of attempting to “seize veto power over the operational decisions of the United States military,” calling the company's position “fundamentally incompatible with American principles.” He designated Anthropic a supply-chain risk, declaring that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic”.
This marked the first time the United States has ever publicly applied the supply-chain risk designation to an American company.
Then, just hours later, Altman posted that OpenAI had reached its own agreement with the Pentagon — one that he said included the same two red lines Anthropic had been insisting on.
What OpenAI's contract actually says
On Saturday morning, OpenAI published a detailed blog post titled "Our agreement with the Department of War," which included excerpts of the actual contract language and a point-by-point explanation of its approach.
The blog post confirms several key details. OpenAI says it has three red lines, not two, adding a prohibition on “high-stakes automated decisions (e.g. systems such as 'social credit')” alongside the surveillance and autonomous weapons restrictions that Anthropic also demanded.
The published contract language opens with the same formulation the Pentagon demanded of Anthropic: “The Department of War may use the AI System for all lawful purposes.” But it then adds specific restrictions tied to existing law and DoD policy. On autonomous weapons, it states the AI system "will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control," citing DoD Directive 3000.09, which requires rigorous testing before any AI system can be used in autonomous or semi-autonomous weapons.
On surveillance, the language prohibits use of the system for “unconstrained monitoring of U.S. persons' private information,” citing the Fourth Amendment, the Foreign Intelligence Surveillance Act, Executive Order 12333, and the Posse Comitatus Act.
OpenAI also disclosed a notable future-proofing clause: even if the underlying laws or Pentagon policies change, use of OpenAI's systems “must still remain aligned with the current standards reflected in the agreement”. That addresses part of Anthropic's broader worry that protections anchored to today's law and policy could be stripped away later.
Beyond the contract text, OpenAI said it negotiated what it calls a "multi-layered" enforcement approach, according to Fortune's Sharon Goldman. Goldman reported that at an OpenAI all-hands meeting on Friday afternoon, Altman told employees the government was willing to let OpenAI build its own “safety stack” (a layered system of technical, policy, and human controls) and that if the model refuses to perform a task, the government would not force OpenAI to override it.
Stronger than Anthropic's original deal?
In the blog post, OpenAI went further than simply defending its own agreement. It explicitly claimed that its contract “provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic's original contract”. The company also appeared to take a pointed shot at Anthropic's previous arrangement, writing that "other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments".
This raises a question Anthropic may wish to address: whether its original July 2025 contract with the Pentagon actually had weaker enforcement mechanisms than what OpenAI has now secured. OpenAI's implication is that Anthropic's earlier deal relied more on usage policies than on the kind of technical architecture (cloud-only deployment, embedded engineers, a retained safety stack) that OpenAI is now touting.
OpenAI also explicitly stated that it opposes Anthropic's supply-chain risk designation (“No, and we have made our position on this clear to the government”) and said it asked the Pentagon, as part of its deal, to "try to resolve things with Anthropic" and to extend the same terms to all AI labs.
But the contract language OpenAI published also invites scrutiny.
The word "unconstrained" in OpenAI's surveillance clause is worth noting. The contract bans “unconstrained monitoring of U.S. persons' private information”, but what counts as constrained? A program that collects publicly available social media data and geolocation records on millions of Americans, filtered through targeting criteria, could plausibly be described as “constrained” while still constituting what many would consider mass surveillance. Anthropic wanted language that would close that interpretive gap. OpenAI's contract, while more detailed than what was previously known, still relies on the government's own characterization of what current law permits.
The future-proofing clause is a meaningful concession, locking the contract to today's legal standards even if tomorrow's change. But it only protects against future changes to law. It does not protect against the scenario Anthropic was most concerned about: a future Pentagon official interpreting existing law more permissively than it is interpreted today.
Industry solidarity — up to a point
The week leading up to the deadline saw a rare display of cross-company solidarity. More than 700 employees across Google and OpenAI signed an open letter titled “We Will Not Be Divided,” expressing hope that their leaders would “put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight”.

The majority of signatories were Google employees, with the remainder from OpenAI. More than 100 Google AI staffers also sent a separate internal letter to Chief Scientist Jeff Dean requesting limits on military use of Google's Gemini models.
Altman himself publicly backed Anthropic on Friday morning, telling CNBC he did not “personally think the Pentagon should be threatening DPA [the Defense Production Act] against these companies” and that he shared Anthropic's red lines. In an internal memo to staff the night before, he had written: “This is a case where it's important to me that we do the right thing, not the easy thing that looks strong but is disingenuous.”
By Friday night, OpenAI had signed a deal to replace Anthropic on the Pentagon's classified network.
At the OpenAI all-hands, according to Fortune, a company official said the relationship between Anthropic and the government had broken down in part because Amodei's public blog posts had offended Department of War leadership.
The legal and business fallout
Even with the contract language now public, the central tension identified by Anthropic has not fully disappeared.
Anthropic has said it will challenge the designation in court. The company has argued that the designation can only extend to the use of Claude on Department of War contracts and cannot restrict how defense contractors use Anthropic products to serve other customers.
Anthropic, valued at $380 billion following a recent $30 billion funding round, earns approximately $14 billion in annualized revenue. The $200 million Pentagon contract itself is a small fraction of that, but the ripple effects of the supply-chain designation could reach much further — potentially affecting Amazon, Google, and Nvidia, all of which have invested billions in the company and may face questions about their own Pentagon work.
Senator Mark Warner, vice chair of the Senate Select Committee on Intelligence, condemned the administration's approach. “The president’s directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations,” Warner said.
What happens next
Anthropic has vowed to continue operating normally for commercial customers and says it will fight the designation in court. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said Friday evening.
OpenAI, meanwhile, has positioned itself as both a principled actor and a willing partner — publicly endorsing Anthropic's red lines, publishing excerpts of its own contract, and securing the deal Anthropic lost.
Whether the Pentagon will genuinely honor the same restrictions it punished Anthropic for requesting remains an open question. OpenAI has published select contract language, but the full agreement remains undisclosed, and Anthropic has not yet responded to OpenAI's claim that its original deal had weaker safeguards.
There is, at least, an irony that is difficult to overlook. The Pentagon spent a week publicly attacking Anthropic's position as “arrogant,” “sanctimonious,” and “fundamentally incompatible with American principles.” It designated an American AI company, the first frontier AI company ever to deploy models on classified military networks, as a national security threat. It did all of this over two restrictions that, by Saturday morning, its new AI partner had published in a blog post for the world to read, embedded in a contract whose opening clause is the very formulation Anthropic rejected: "The Department of War may use the AI System for all lawful purposes."
Written by
Joseph Nordqvist
Joseph founded AI News Home in 2026. He studied marketing and later completed a postgraduate program in AI and machine learning (business applications) at UT Austin’s McCombs School of Business. He is now pursuing an MSc in Computer Science at the University of York.
This article was written by the AI News Home editorial team with the assistance of AI-powered research and drafting tools. All analysis, conclusions, and editorial decisions were made by human editors.
References
1. Donald Trump, Truth Social post directing agencies to cease Anthropic use. Truth Social, February 27, 2026.
2. Pete Hegseth, statement on X designating Anthropic a supply-chain risk. X (Twitter), February 27, 2026.
3. OpenAI sweeps in to snag Pentagon contract after Anthropic labeled 'supply chain risk' in unprecedented move. Jeremy Kahn, Fortune, February 28, 2026.
4. Statement from Dario Amodei on our discussions with the Department of War. Dario Amodei, Anthropic, February 26, 2026.
5. Tensions between the Pentagon and AI giant Anthropic reach a boiling point. NBC News, February 23, 2026.
6. OpenAI's Sam Altman announces Pentagon deal with 'technical safeguards'. Anthony Ha, TechCrunch, February 28, 2026.
7. OpenAI strikes a deal with the Pentagon, just hours after Trump orders end to Anthropic contracts. Sharon Goldman, Fortune, February 27, 2026.
8. Employees at Google and OpenAI support Anthropic's Pentagon stand in open letter. TechCrunch, February 27, 2026.
9. Google Workers Seek ‘Red Lines’ on Military A.I., Echoing Anthropic. The New York Times, February 27, 2026.
10. Warner Condemns Pentagon Pressure Campaign, Raises Concerns about Politicized Directive Targeting Anthropic. Office of Sen. Mark R. Warner (D-VA), February 27, 2026.