Anthropic was the only AI company with models on the Pentagon's classified network, under a contract worth up to $200 million. The Pentagon demanded Anthropic drop restrictions on autonomous weapons and mass domestic surveillance. Anthropic refused. Friday morning, Trump ordered all federal agencies to stop using Anthropic, and Hegseth designated the company a "supply chain risk" -- a label normally reserved for foreign adversaries like Huawei. Friday evening, Sam Altman announced OpenAI had signed a Pentagon deal containing the same two restrictions Anthropic was blacklisted for requesting.
1. Altman Threaded the Needle (Sam Altman, OpenAI, Tech Pragmatists)
Same red lines, different packaging. OpenAI got what Anthropic wanted by not making it a fight.
OpenAI's deal also bans autonomous weapons and mass surveillance -- the exact things Anthropic was punished for insisting on. Altman wrote: "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems." The Pentagon agreed.
The difference is framing, not substance. Anthropic wanted explicit contractual prohibitions. OpenAI agreed the Pentagon could use its tech for "any lawful purpose" while building a "safety stack" -- a layered system of technical, policy, and human controls. The contract also limited deployment to cloud environments, not edge systems like drones or aircraft. OpenAI said the restrictions "reflect existing US law and Pentagon policy." The Pentagon accepted that framing from OpenAI but rejected similar demands from Anthropic.
OpenAI also embedded forward-deployed engineers with security clearances to monitor how the Pentagon uses the models. If the model refuses a task, the agreement says the government will not force OpenAI to make it comply. Altman said the Pentagon should offer these same terms to all AI companies.
2. This Was Punishment, Not Policy (Dario Amodei, AI Safety Advocates, 666 AI Employees)
The government blacklisted a US company for negotiating a contract. Then gave the competitor the same terms.
Dario Amodei called the White House's actions "retaliatory and punitive." Anthropic says it will challenge the supply chain risk designation in court, calling it "legally unsound" and an unprecedented action against a US company. Anthropic argues the Defense Secretary lacks the statutory authority to bar everyone who does business with the military from also doing business with Anthropic.
An open letter titled "We Will Not Be Divided" drew signatures from 573 Google employees and 93 OpenAI employees. The letter argues the government is "trying to divide each company with fear that the other will give in." A separate letter to the Pentagon and Congress, signed by prominent tech leaders including 11 OpenAI employees, stated: "We strongly believe the federal government should not retaliate against a private company for declining to accept changes to a contract."
Altman backed Anthropic publicly -- then signed his own Pentagon deal hours later. On Thursday he told OpenAI employees that the company shared the same "red lines" as Anthropic and aimed to "help de-escalate" tensions. In a CNBC interview he said: "For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our war fighters." By Friday evening, he'd announced the deal on X.
3. A Contractor Doesn't Set the Terms (Trump, Hegseth, Defense Hawks)
The military -- not a Silicon Valley company -- decides how defense technology gets used.
Trump's Truth Social post left no ambiguity. He called Anthropic "A RADICAL LEFT, WOKE COMPANY" and wrote: "WE will decide the fate of our Country -- NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about." A Pentagon official told CBS News: "You have to trust your military to do the right thing."
The supply chain risk designation was a message to the entire tech industry. It bars all military contractors, suppliers, and partners from doing business with Anthropic -- effective immediately. Hegseth announced it on X. The Pentagon gave Anthropic six months to transition out of classified systems, but the commercial blacklist has no expiration. Normally, this designation targets foreign adversaries. This is the first time it has been used against an American company.
From the administration's perspective, the issue was never the red lines themselves -- it was who gets to impose them. Hegseth framed Anthropic's safety restrictions as ideological posturing that limits national security capabilities. OpenAI accepted that the restrictions reflect existing law and let the Pentagon describe them that way. Anthropic insisted on writing them into the contract as binding terms, which the Pentagon interpreted as a contractor constraining how the military uses what it paid for.
4. The Law Doesn't Say What Altman Claims (Legal Critics, #CancelChatGPT, Amodei's CBS Interview)
OpenAI's "existing law" defense has a hole you could fly a drone through.
Altman's deal rests on the phrase "any lawful purpose" -- but the law hasn't caught up with AI. Amodei gave a concrete example on CBS: the government can legally buy commercial datasets -- social media posts, geolocation data, browsing history -- and run AI analysis across all of it. "That actually isn't illegal," he said. "It was just never useful before the era of AI." Current surveillance statutes were written for wiretaps and search warrants, not for AI systems that can correlate millions of data points purchased on the open market. "Any lawful purpose" sounds like a guardrail. It's actually a gap.
The public sided with Anthropic -- overwhelmingly. Claude surpassed ChatGPT to hit #1 on Apple's App Store. The QuitGPT movement claims 1.5 million people cancelled subscriptions or took action. A Reddit post calling to "Cancel and Delete ChatGPT" hit 30,000 upvotes. Katy Perry posted a screenshot of her new Claude Pro subscription. Anthropic's free signups tripled, breaking records every day that week. The market spoke: consumers don't trust "existing law" to constrain a Pentagon armed with AI.
One caveat: Amodei himself isn't categorically against autonomous weapons. He's said Anthropic believes frontier AI models are "not reliable enough to power fully autonomous weapons" -- not that they should never exist. The red line is technical, not moral. Which means the real disagreement between Anthropic and OpenAI isn't about values. It's about whether you write protections into the contract or hope the law -- written decades before anyone imagined this technology -- holds up.
Where This Lands
Anthropic drew two red lines. The Pentagon blacklisted them. OpenAI drew the same two red lines and got a deal. The difference is packaging -- OpenAI built technical guardrails instead of contract terms, framed restrictions as reflecting existing law, and gave the Pentagon face-saving language. But the law Altman is relying on doesn't actually cover what AI makes possible -- and 1.5 million consumers just voted with their wallets. The open question isn't whether the red lines hold. It's whether "any lawful purpose" means anything when the law was written before the technology existed.
Sources
- CNN: OpenAI Pentagon Deal
- NPR: Trump Anthropic Pentagon
- Fortune: Supply Chain Risk Designation
- CNBC: OpenAI Strikes Deal
- Fortune: OpenAI Pentagon Talks
- Axios: Pentagon Safety Red Lines
- TechCrunch: Technical Safeguards
- The Hill: Pentagon Deal
- CBS: Amodei Retaliatory and Punitive
- TechCrunch: Supply Chain Risk
- CNBC: Altman De-escalate
- Engadget: Employee Open Letter
- CNN: Same Red Lines
- Daily Caller: Trump on Anthropic
- Anthropic: Statement on Department of War
- Al Jazeera: Classified Network
- Fortune: 6 Month Phaseout
- Common Dreams: Anthropic Trump
- Anthropic: Statement on Secretary of War Comments
- CBS: Red Lines Anthropic Would Not Cross
- CNBC: Claude #1 on App Store
- Axios: Claude Beats ChatGPT Downloads
- Axios: Safety Concerns
- TechCrunch: Agreement Details
- Katy Perry: Claude Pro Subscription