The Pentagon wants Claude without guardrails. Anthropic said no. Here's where it stands
Written by Veronica Salvador and Joseph Nordqvist · February 27, 2026 at 1:54 PM UTC
13 min read
A rapidly escalating dispute between the U.S. Department of War and Anthropic has become the most consequential confrontation between the federal government and an AI company to date. At stake: a $200 million contract, the future of AI safety guardrails in military applications, and an unprecedented threat to brand an American technology company as a national security liability.
UPDATE — February 28:
What is actually happening
The U.S. Department of War [1] and Anthropic are in a standoff over the terms under which the military can use Claude, Anthropic's frontier AI model.
The Pentagon wants Anthropic to allow Claude to be used for "all lawful purposes" without company-imposed restrictions.[2][3]
Anthropic has refused to remove two specific contractual safeguards:[4]
A prohibition on using Claude for mass domestic surveillance of American citizens.[4]
A prohibition on fully autonomous weapons — systems that select and engage targets without human involvement.[4]
These two safeguards, according to Anthropic, have been part of its contracts with the Department of War since the beginning of the relationship. Anthropic says they have not, to date, been a barrier to the military's use of Claude.[4]
The Pentagon disputes the framing. Chief Pentagon spokesman Sean Parnell stated on February 26 that the Department "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."[2] The Pentagon's position is that legality is the military's responsibility as the end user, and that a private contractor should not impose additional restrictions beyond what the law requires.[3]
The timeline
The conflict has been building for months, but escalated sharply in the final week of February 2026.
February 16: Axios first reported that Defense Secretary Hegseth was "close" to cutting business ties with Anthropic and designating the company a "supply chain risk."[7] A senior Pentagon official told Axios the designation would mean that any company wanting to do business with the U.S. military would have to cut ties with Anthropic.[7]
February 24 (Tuesday): Hegseth met with Anthropic CEO Dario Amodei at the Pentagon.[3][5][6] Sources described the meeting as cordial in tone,[5] but Hegseth delivered an ultimatum: agree to the Pentagon's terms by Friday at 5:01 PM ET, or face consequences.[2][3] Those consequences included canceling Anthropic's $200 million contract, invoking the Defense Production Act to compel Anthropic to provide its technology on the Pentagon's terms, and designating Anthropic a "supply chain risk."[3][5][6]
February 25 (Wednesday): The Pentagon reached out to Boeing and Lockheed Martin, asking both defense contractors to assess their reliance on Anthropic's Claude model.[8] Boeing's defense division reported it has no active contracts with Anthropic; a Boeing executive told Axios that the company had previously sought a partnership with Anthropic but "could not come to an agreement" because Anthropic "was somewhat reluctant to work with the defense industry."[8]
February 26 (Thursday): Significant developments occurred in rapid succession.
First, Anthropic said the contract language received overnight from the Pentagon made "virtually no progress" on its two core safeguards, pairing new compromise language with legalese that would allow those safeguards to be "disregarded at will."[9]
Second, Parnell posted on X reiterating the Friday deadline: "They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW."[2]
Third, Dario Amodei published a formal statement on Anthropic's website.[4] (More on this below.)
Also on Thursday, Under Secretary of War Emil Michael posted on X calling Amodei a "liar" with a "God-complex," writing that Amodei "wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk."[10][11]
February 27 (Friday): The deadline day. As of this writing, no resolution has been announced. An open letter titled "We Will Not Be Divided," signed by at least 266 Google employees and 65 OpenAI employees, expressed support for Anthropic's position and urged Google and OpenAI leadership to refuse similar Pentagon demands.[12][13] The letter stated: "The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War."[12]
According to The Wall Street Journal, OpenAI CEO Sam Altman said in an internal note to staff that OpenAI was exploring a deal with the Department of War to deploy its models in classified environments, but would seek contract terms excluding uses "which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons."[16] No deal has been signed, and talks could still fall through, the Journal reported. Altman said he hoped to help broker a resolution between the two sides.[16]
UPDATE — February 27, 5:15 PM ET
Minutes after the 5:01 PM deadline passed, President Trump posted a statement on social media, which was shared on the official White House X account, directing "every Federal Agency in the United States Government to immediately cease all use of Anthropic's technology." The statement included a six-month phase-out period for agencies currently using Anthropic's products and threatened to use "the full Power of the Presidency" to compel compliance, warning of "major civil and criminal consequences."
The scope of this directive extends well beyond the Department of War contract that was the subject of negotiations. Anthropic's Claude is deployed across multiple federal agencies, not just the Pentagon. It is not yet clear whether this statement constitutes a formal executive order or what legal mechanism would enforce it.
The statement characterized Anthropic as attempting to "strong-arm the Department of War, and force them to obey their Terms of Service instead of our Constitution." Anthropic has not yet responded publicly.
Pete Hegseth posted a response on X shortly afterward.
Amodei's statement
On the evening of February 26, Dario Amodei published a detailed statement on Anthropic's website titled "Statement from Dario Amodei on our discussions with the Department of War."[4]
The statement opened by affirming Anthropic's commitment to national security, noting that the company was the first frontier AI company to deploy models on the U.S. government's classified networks, the first to deploy at the National Laboratories, and the first to provide custom models for national security customers.[4] Amodei also cited Anthropic's decision to forgo several hundred million dollars in revenue by cutting off access to Claude for firms linked to the Chinese Communist Party.[4]
Amodei then laid out the company's two objections in detail.
On mass domestic surveillance, Amodei wrote that AI-driven mass surveillance "presents serious, novel risks to our fundamental liberties," noting that current law has not caught up with AI's capabilities. He cited existing practices — such as the government purchasing detailed records of Americans' movements, web browsing, and associations from public sources without a warrant — that the Intelligence Community itself has acknowledged raise privacy concerns.[4]
On fully autonomous weapons, Amodei wrote that while partially autonomous weapons (such as those used in Ukraine) are "vital to the defense of democracy," and even fully autonomous weapons "may prove critical" in the future, current AI systems "are simply not reliable enough" to power them. He noted that Anthropic had offered to work directly with the Department of War on R&D to improve reliability, but the offer was not accepted.[4]
Amodei then noted a tension between two of the Pentagon's threats, designating Anthropic a supply chain risk while invoking the DPA to compel access to its technology: "These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security."[4]
He concluded: "Regardless, these threats do not change our position: we cannot in good conscience accede to their request."[4] If the Pentagon chooses to end the relationship, Amodei said Anthropic would "work to enable a smooth transition to another provider."[4]
What is a "supply chain risk" designation?
The supply chain risk designation is a mechanism typically reserved for foreign adversary technology.[7][8] The most prominent prior example is the Chinese telecommunications company Huawei.[3][8] According to Geoffrey Gertz, a senior fellow at the Center for a New American Security, using this designation against an American company would be unprecedented.[1]
The practical impact is uncertain. At minimum, it could prohibit other Pentagon contractors from using Anthropic's tools in their military work. At maximum, it could prohibit them from using Anthropic's tools at all — a scenario Gertz described to NPR as "particularly damaging."[1]
Katie Sweeten, a former liaison between the Justice Department and the Department of Defense, told CNN she found the simultaneous use of the supply chain risk designation and the Defense Production Act difficult to reconcile: "I would assume we don't want to utilize the technology that is the supply chain risk, right? What it sounds like is that the supply chain risk may not be a legitimate claim, but more punitive because they're not acquiescing."[5]
The Defense Production Act question
The Pentagon's threat to invoke the Defense Production Act (DPA) has raised significant legal questions. The DPA is a Korean War-era statute that gives the president broad authority to direct private industry in the name of national defense.[14] It was most recently extended through September 2026.[14]
Legal scholar Alan Z. Rozenshtein, an Associate Professor of Law at the University of Minnesota Law School writing in Lawfare, analyzed the legal complexities in detail.[14] He noted that the DPA has two functionally different compulsion powers: a "queue-jumping" power (giving the government priority access to existing products) and a compelled-contracting power (potentially forcing a company to accept new work under the government's terms).[14]
The legal analysis, Rozenshtein argued, depends on what the government is actually demanding. If the Pentagon wants to change the contract terms but keep using the same product, the legal ground is ambiguous. If it wants to force Anthropic to retrain Claude to strip out safety restrictions entirely, the legal questions become even more complex — there is debate over whether the DPA authorizes the government to compel a company to create a product it does not currently make.[14]
Rozenshtein also noted an irony: the Biden administration's use of the DPA's reporting powers (Title VII) against AI companies drew sharp Republican criticism at the time. Hegseth's threatened use of Title I compulsion powers is "orders of magnitude more coercive."[14]
His broader conclusion: "This fight is happening because Congress hasn't set substantive rules for military AI. If Congress had legislated guidelines on autonomous weapons and surveillance, Anthropic would likely be far more comfortable selling its systems to the military—and the DPA threat would have never arisen."[14]
The Pentagon's position
The Pentagon's case rests on two arguments. First, that existing law and policy already address Anthropic's concerns. Michael told CBS News the Defense Department would "put it in writing that we're specifically acknowledging" the relevant federal laws, and would acknowledge existing Pentagon policies regarding autonomous weapons.[15] He also said the military had invited Anthropic to participate in its AI ethics board.[15]
Second, that a private company should not dictate operational terms to the military. Michael stated: "At some level, you have to trust your military to do the right thing."[15] He also said the Pentagon needs to "be prepared for the future" and for "what China is doing."[15]
A senior administration official told Axios earlier this month that competing models "are just behind" Claude in specialized government applications, making an abrupt switch complicated.[7]
Anthropic's position
Anthropic's position is that its safeguards are narrow, specific, and do not interfere with the military's operational needs.[4]
The company argues that even if mass surveillance is technically illegal, the law has not kept pace with AI's capabilities, and contractual safeguards provide an additional layer of protection.[4] On autonomous weapons, Anthropic's argument is primarily technical: current AI systems are not reliable enough to power fully autonomous weapons.[5]
Anthropic has emphasized that it supports military use of AI broadly and is willing to work with the Pentagon on all other applications. In his statement, Amodei wrote: "We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner."[4]
Written by
Veronica Salvador
Veronica Salvador is an editor at AI News Home, where she covers enterprise AI, emerging models, and the business impact of artificial intelligence. She recently completed UT Austin's Post Graduate Program in Generative AI for Business.
Co-authored by
Joseph Nordqvist
Joseph founded AI News Home in 2026. He studied marketing and later completed a postgraduate program in AI and machine learning (business applications) at UT Austin’s McCombs School of Business. He is now pursuing an MSc in Computer Science at the University of York.
This article was written by the AI News Home editorial team with the assistance of AI-powered research and drafting tools. All analysis, conclusions, and editorial decisions were made by human editors. Read our Editorial Guidelines
References
1. Deadline looms as Anthropic rejects Pentagon demands it remove AI safeguards — Geoff Brumfiel and Shannon Bond, NPR, February 26, 2026
2. Sean Parnell post on X reiterating the Friday deadline — Sean Parnell, X (formerly Twitter), February 26, 2026 (primary source)
3. Anthropic Says It Cannot ‘Accede’ to Pentagon in Talks Over A.I. — Julian E. Barnes and Sheera Frenkel, The New York Times, February 26, 2026
4. Statement from Dario Amodei on our discussions with the Department of War — Anthropic, February 26, 2026 (primary source)
5. Pentagon threatens to make Anthropic a pariah if it refuses to drop AI guardrails — CNN, February 24, 2026
6. Anthropic offered Pentagon the ability to use AI systems for missile defense — Jared Perlo and Gordon Lubold, NBC News, February 25, 2026
7.
8.
9.
10. AI giant Anthropic says it "cannot in good conscience" agree to Pentagon demands — Lee Ferran and Sydney J. Freedberg Jr., Breaking Defense, February 26, 2026
11. Emil Michael posts on X regarding Anthropic — Emil Michael, X (formerly Twitter) via Breaking Defense, February 26, 2026
12. Open letter urges Google and OpenAI to join Anthropic's red lines — Ina Fried, Axios, February 27, 2026
13. OpenAI and Google staffers sign petition seeking limits on Pentagon's AI use — Siladitya Ray, Forbes, February 27, 2026
14. What the Defense Production Act Can and Can't Do to Anthropic — Alan Z. Rozenshtein, Lawfare, February 25, 2026
15. As Pentagon-Anthropic feud risks boiling over, military says it's made compromises to AI giant — Jennifer Jacobs and Joe Walsh, CBS News, February 26, 2026
16. Altman Says OpenAI Is Working on Pentagon Deal Amid Anthropic Standoff — Keach Hagey, The Wall Street Journal, February 27, 2026