On February 27, Sam Altman sent a memo to OpenAI employees naming the company's "main red lines": mass surveillance, autonomous lethal weapons, and removing humans from the loop on high-stakes decisions. He publicly said he agreed with Anthropic's position. Hours later, OpenAI took over Anthropic's Pentagon deal, worth up to $200 million. Over 1.5 million users have since joined the QuitGPT boycott, and Claude surged to #1 on the App Store. On March 3, Altman admitted the deal "just looked opportunistic and sloppy."
1. He Did What He Always Does (Pattern Critics, OpenAI Insiders)
The Pentagon deal isn't an anomaly. It's the latest version of the same move Altman has been running since 2023.
Critics also argue the deal itself is dangerous. Leo Gao, an OpenAI alignment researcher, posted on X that contrary to Altman's safety assurances, the actual deal lets the Pentagon do whatever it wants; the safety language is just "window dressing." Academics echoed the criticism, and safety researcher Jasmine Wang called for "independent legal counsel" to analyze whether the revised contract actually codified the red lines Altman claimed.
The pattern traces back to Altman's 2023 board firing. Helen Toner, a former board member, alleged Altman withheld information about ChatGPT's release (the board "learned about ChatGPT on Twitter"), concealed his ownership of the OpenAI startup fund, and provided "inaccurate information about the small number of formal safety processes." The board had documentation, including screenshots, and he was accused of outright lying to it.
Why didn't he stand with Anthropic? Commentators argue that a united front — all the major AI companies drawing the same red lines — would have carried far more weight than any one company's contract terms.
2. Altman Acted for the Public Good (Altman's Defense, Pentagon Officials)
The Pentagon was escalating its attacks on Anthropic. Altman argues he stepped in to prevent something worse — for everyone, including Anthropic.
Altman's core claim is that he stopped the escalation. To him, the fight "potentially threatened to damage the AI industry as a whole" — the Pentagon could have nationalized an AI lab or coerced a private company. His framing: "If we are right and this does lead to a de-escalation between the DoD and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry."
And the red lines are now on paper. OpenAI renegotiated the deal on March 3 with three explicit limits: no mass domestic surveillance, no autonomous weapons systems, and no access for intelligence agencies. These are the same categories Anthropic insisted on. Altman points to this as evidence the deal created a framework rather than surrendering one.
At the all-hands meeting, Altman framed military use as inherently outside OpenAI's control. "So maybe you think the Iran strike was good and the Venezuela invasion was bad. You don't get to weigh in on that." Pentagon Undersecretary Emil Michael had accused Anthropic's Dario Amodei of having a "God-complex" for wanting to "personally control the US Military." Altman's defense leans into the same logic — it's not OpenAI's job to run the Pentagon.
3. The Market Is Deciding (Users, Business Analysts)
1.5 million boycott sign-ups and a flipped App Store suggest the public isn't waiting for the industry to sort this out.
Claude surged to #1 on Apple's U.S. App Store, displacing ChatGPT for the first time. Claude's free active users are up 60% since January, daily sign-ups have quadrupled, and paying subscribers have more than doubled in 2026. ChatGPT uninstalls spiked 295% in a single day after the Pentagon announcement, and one-star App Store reviews surged 775% the following Saturday.
The enterprise market was already shifting before the Pentagon controversy. Anthropic holds 32% of the enterprise LLM market, compared to OpenAI's 25% — a reversal from 2023, when OpenAI held 50% and Anthropic had 12%. Bloomberg's analysis: Altman's "mishandled Pentagon deal works in Anthropic's favor." MIT Technology Review called the revised deal "what Anthropic feared" — a compromise that normalizes military AI use on terms weaker than what Anthropic demanded.
Whether the heat is "fair" may be beside the point. Altman admitted the optics were bad. His own employees are publicly dissenting. And 1.5 million people signed up to boycott his product in the space of a week. The question isn't whether Altman deserves the criticism — it's whether OpenAI can afford it.
Where This Lands
Altman's defenders say he did the hard, ugly thing to prevent something worse — government coercion of the entire AI industry. His critics say this is exactly what the board warned about in 2023: a leader who says one thing and does another when the stakes get high enough. The market is siding with the critics, at least for now. Claude is #1 on the App Store, OpenAI's enterprise lead has evaporated, and Altman's own alignment researchers are calling his safety language "window dressing." The revised deal with its three red lines may eventually look like responsible statesmanship. Or it may look like the moment Sam Altman taught a generation of AI users that principles are negotiable.
Sources
- NBC News: OpenAI strikes Pentagon deal hours after Trump bans Anthropic
- NPR: Trump bans Anthropic, OpenAI gets Pentagon deal
- CNBC: Altman tells staff operational decisions up to government
- CNBC: OpenAI Pentagon deal amended with surveillance limits
- Fortune: Altman defends Pentagon deal amid backlash
- Fortune: Altman renegotiating Pentagon deal
- Fortune: Claude hits #1, 1.5M QuitGPT users
- Fortune: Helen Toner on why Altman was fired
- Fortune: AI kingpins have themselves to blame
- CNN Business: OpenAI staff reactions to Pentagon deal
- CBS News: Dario Amodei on red lines Anthropic wouldn't cross
- Axios: Claude overtakes ChatGPT in app store
- TheInsaneApp: 1.5M users joined QuitGPT
- TechCrunch: OpenAI Pentagon deal with technical safeguards
- TechCrunch: Enterprises prefer Anthropic's models
- Bloomberg: Altman's mishandled deal works in Anthropic's favor
- MIT Technology Review: OpenAI's compromise is what Anthropic feared