Anthropic, which built one of the most powerful AI models in the world, refused to let the Pentagon use it without limits. Within hours, Trump called Anthropic "Leftwing nut jobs" and "a radical left, woke company," directed every federal agency to stop using its technology, and Defense Secretary Pete Hegseth slapped it with a "supply chain risk" designation -- a label historically reserved for foreign adversaries like Huawei. Anthropic is the first known American company to receive it. Ten days later, Anthropic sued. Then Microsoft, which has $5 billion invested in Anthropic, appeared in court to support it. So did 37 employees of OpenAI and Google DeepMind -- including Google's chief scientist Jeff Dean. The question at the center of this is deceptively simple: can an AI company tell the military no?
1. Anthropic's Red Lines Are Important (Anthropic, Kalinowski, AI Researchers)
The company asked for two things: no mass surveillance of Americans, and no killer robots without a human in the loop. That shouldn't be controversial.
Anthropic's position was narrow and specific. The company didn't refuse to work with the military -- it had a $200 million Pentagon contract and was the department's preferred AI vendor. It asked for two contractual red lines: Claude would not be used for mass surveillance of US citizens, and would not power autonomous weapons that kill without human approval. Everything else -- intelligence analysis, logistics, communications, cybersecurity -- was fine.
The Pentagon instead wanted an ambiguous "all lawful purposes" catch-all. That's the crux: the phrase is broad enough to cover surveillance programs and autonomous targeting systems that are technically legal but that Anthropic believes shouldn't exist yet. CEO Dario Amodei's response: "These threats do not change our position: we cannot in good conscience accede to their request."
The dissent reached inside OpenAI, too. When OpenAI swooped in and took the Pentagon deal Anthropic rejected, its own robotics lead, Caitlin Kalinowski, resigned. Her statement: "Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got." Then 37 employees of OpenAI and Google DeepMind, including Google chief scientist Jeff Dean, filed an amicus brief supporting Anthropic's lawsuit, warning that the blacklist threatens the entire AI industry.
2. A Company Doesn't Get to Dictate National Security (Trump Admin, Hegseth, Emil Michael)
The military can't let a private company veto how it fights wars. If you don't want to serve, step aside -- but don't pretend you're saving democracy.
Trump framed this as sovereignty. "THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!" he posted on Truth Social. The argument: the Commander-in-Chief and Pentagon leadership decide how to use tools of war, not a San Francisco startup. If Anthropic doesn't want the contract, fine -- but it doesn't get to keep the contract while restricting what the military can do.
The personal attacks signaled how seriously the administration took this. Emil Michael, Under Secretary of Defense for Research and Engineering, publicly called Amodei "a liar" with "a God-complex." And Hegseth's ultimatum -- accept the Pentagon's terms by the deadline or lose its business -- was not a negotiating tactic. It was a threat the Pentagon followed through on.
The supply chain risk designation is the most extreme tool available, and it does more than end Anthropic's Pentagon contract. It requires every defense vendor and contractor to certify they don't use Claude in any Pentagon work -- a cascading ban that reaches far beyond one contract. The Pentagon turned a label typically reserved for Chinese telecom companies against an American AI leader.
3. This Hurts the Military More Than Anthropic (Microsoft, Former Military Officials, Defense Startups)
Microsoft has $5 billion invested in Anthropic and integrates its models into military systems. Ripping out Claude overnight doesn't punish the company. It punishes warfighters.
Microsoft's amicus brief was blunt: without a restraining order, contractors would be forced to immediately reconfigure existing products, potentially "hampering US warfighters at a critical point in time." Microsoft asked the court to block the designation to allow a more orderly transition and avoid disrupting the American military. It was the first standalone company to file in Anthropic's support.
Twenty-two former high-ranking military officials agreed. They filed their own amicus brief supporting Anthropic's injunction request, warning that abruptly removing Claude from defense systems creates operational risk.
And the broader damage may be worse than the immediate disruption. TechCrunch reported that the Pentagon's treatment of Anthropic may scare other startups away from defense work entirely -- undermining the government's own push to modernize with commercial AI. If the message is "agree to everything or get blacklisted," the companies most worried about safety are the ones least likely to bid on future contracts.
4. OpenAI Took the Deal -- and the Fine Print Is Ugly (The Intercept, EFF, Brad Carson)
OpenAI said it got the same safeguards Anthropic wanted. Critics say the contract language is vague, unenforceable, and secret.
OpenAI announced its own Pentagon deal hours after Anthropic's ban. Sam Altman later admitted it "looked opportunistic and sloppy." The deal initially accepted the "all lawful purposes" language Anthropic had rejected. After backlash, OpenAI and the Pentagon amended the contract on March 2-3 to add surveillance limits.
But no one has seen the actual contract. The Intercept's headline: "OpenAI on Surveillance and Autonomous Killings: You're Going to Have to Trust Us." Brad Carson, former Under Secretary of the Army under Obama, said: "I'm not confident in the language at all. And in some parts I don't even believe it." Another former Pentagon official called the carve-outs "the get out of jail free card" that "gives them enough flexibility to still do whatever the fuck they want." The EFF called the contract language "weasel words."
The consumer reaction told its own story. ChatGPT uninstalls surged 295% the day after the deal was announced. Claude hit #1 on the US App Store. The #QuitGPT movement claimed 1.5 million pledge signers by March 5. The backlash was short-lived -- ChatGPT reclaimed the top spot by March 9 -- but it showed that a significant slice of AI consumers care about these red lines even when the Pentagon doesn't.
Where This Lands
Anthropic's lawsuit is now before the courts, and the coalition backing it -- Microsoft, Google's chief scientist, 22 former military officials, OpenAI's own employees -- is broader than anyone expected. On one hand, the administration's argument is straightforward: the military decides how to fight wars, and no company gets veto power. On the other hand, using a designation reserved for foreign adversaries against America's leading AI safety company -- while the competitor that took the deal quietly amends its contract after backlash -- raises a question the courts will have to answer. Where this lands depends on whether judges see Anthropic's red lines as a principled stand the government punished unlawfully, or a company that overplayed its hand and lost.
Sources
- https://www.axios.com/2026/02/24/anthropic-pentagon-claude-hegseth-dario
- https://www.cnn.com/2026/02/24/tech/hegseth-anthropic-ai-military-amodei
- https://fortune.com/2026/02/25/defense-secretary-pete-hegseth-meets-anthropic-ceo-dario-amodei-woke-ai/
- https://www.axios.com/2026/02/26/anthropic-rejects-pentagon-ai-terms
- https://www.cnbc.com/2026/02/27/trump-anthropic-ai-pentagon.html
- https://thehill.com/policy/defense/5758772-pentagon-emil-michael-anthropic-dario-amodei-criticism/
- https://fortune.com/2026/02/27/pentagon-brands-anthropic-ceo-dario-amodei-a-liar-with-a-god-complex-as-deadline-looms-over-ai-use-in-weapons-and-surveillance/
- https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban
- https://techcrunch.com/2026/03/04/anthropic-ceo-dario-amodei-calls-openais-messaging-around-military-deal-straight-up-lies-report-says/
- https://www.cnbc.com/2026/03/03/openai-sam-altman-pentagon-deal-amended-surveillance-limits.html
- https://fortune.com/2026/03/06/anthropic-openai-ceo-apologizes-leaked-memo-supply-chain-risk-designation/
- https://theintercept.com/2026/03/08/openai-anthropic-military-contract-ethics-surveillance/
- https://www.eff.org/deeplinks/2026/03/weasel-words-openais-pentagon-deal-wont-stop-ai-powered-surveillance
- https://techcrunch.com/2026/03/07/openai-robotics-lead-caitlin-kalinowski-quits-in-response-to-pentagon-deal/
- https://www.npr.org/2026/03/08/nx-s1-5741779/openai-resigns-ai-pentagon-guardrails-military
- https://techcrunch.com/2026/03/02/chatgpt-uninstalls-surged-by-295-after-dod-deal/
- https://www.axios.com/2026/03/01/anthropic-claude-chatgpt-app-downloads-pentagon
- https://www.cnbc.com/2026/03/09/anthropic-trump-claude-ai-supply-chain-risk.html
- https://www.cnn.com/2026/03/09/tech/anthropic-sues-pentagon
- https://www.cnbc.com/2026/03/10/microsoft-says-court-should-temporarily-block-pentagon-ban-anthropic.html
- https://thehill.com/policy/technology/5777742-microsoft-backs-anthropic-ai-case/
- https://fortune.com/2026/03/10/google-openai-employees-back-anthropic-legal-fight-military-use-of-ai/
- https://techcrunch.com/2026/03/09/openai-and-google-employees-rush-to-anthropics-defense-in-dod-lawsuit/
- https://www.cnbc.com/2026/03/10/google-deepens-pentagon-ai-push-after-anthropic-sues-trump-admin.html
- https://techcrunch.com/2026/03/08/will-the-pentagons-anthropic-controversy-scare-startups-away-from-defense-work/
- https://www.cnbc.com/2026/03/09/anthropic-was-the-pentagons-choice-for-ai-now-its-banned-and-experts-are-worried.html
- https://fortune.com/2026/03/05/anthropic-openai-feud-pentagon-dispute-ai-safety-dilemma-personalities/