Ronan Farrow's 18-month investigation revealed hundreds of pages of internal documents alleging Sam Altman lied to his board, his executives, and his own safety team.
On April 6, The New Yorker published an investigation by Ronan Farrow and Andrew Marantz into Sam Altman's leadership of OpenAI. The piece draws on over 100 interviews and hundreds of pages of previously undisclosed internal documents, including a 70-page memo compiled by former chief scientist Ilya Sutskever and over 200 pages of private notes by former safety lead Dario Amodei. Sutskever's memo, built from Slack messages, HR communications, and information from CTO Mira Murati, opens with a list of Altman's behavioral patterns. The first item: "Lying." Amodei's notes reach a blunt conclusion: "The problem with OpenAI is Sam himself." OpenAI called the piece a collection of "anonymous claims and selective anecdotes sourced from people with clear agendas."
1. He Lied About Everything That Mattered (Ilya Sutskever, Mira Murati, Dario Amodei)
His chief scientist documented it. His CTO confirmed it. His safety lead wrote 200 pages about it. The word they all landed on was the same.
Sutskever's deposition testimony is the most damaging thing in the piece. Under oath in the Musk v. OpenAI lawsuit, he testified that Altman "exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another." When asked what action was appropriate, his answer was one word: "Termination." He had been considering Altman's removal for at least a year before the board acted in November 2023.
Murati's role makes the case harder to dismiss as a personal grudge. She independently concluded Altman was acting "in dishonest and manipulative ways" and told the board she didn't feel comfortable with him leading the company to AGI. She documented a specific instance where Altman claimed GPT-4's most controversial features had been approved by a safety panel. When she checked with the general counsel, he said he was "confused where Sam got that impression."
Amodei's notes describe a playbook, not a personality flaw. According to the investigation, his private journal details Altman's method: first, say whatever is needed to get someone to do what you want; second, if that doesn't work, undermine the person or destroy their credibility. Amodei's departure from OpenAI to found Anthropic was driven directly by these concerns.
2. Quit Bellyaching: He Built the Most Important AI Company on Earth (Sam Altman, OpenAI, Bret Taylor)
Things change fast in AI. Positions evolve. The people calling him a liar are the ones who lost the power struggle.
Altman's defense is that adaptation beats rigidity. He told the New Yorker that people want leaders who stick with one position forever, but "we are in a field, in an area, where things change extremely quickly." The implication is clear: what looks like lying to Sutskever looks like strategic flexibility to Altman.
OpenAI questions the sources, dismissing the article as a collection of "anonymous claims and selective anecdotes sourced from people with clear agendas." The "clear agendas" framing is pointed: Sutskever left to found SSI, Amodei runs rival Anthropic, and Murati departed to launch her own AI startup. Every key source either lost the internal power struggle or went to a competitor.
The new board exists precisely because the old one failed. After the November 2023 coup collapsed, the independent directors who moved against Altman—Helen Toner and Tasha McCauley—were replaced with Bret Taylor and Larry Summers. The board that fired Altman couldn't hold the company together for a weekend. Microsoft, the investors, and most of OpenAI's 770 employees backed Altman's return. Whatever the memos say about his character, the market and the workforce voted.
3. Let's Not Miss The Whole Safety Thing (Jan Leike, Gary Marcus, AI Safety Community)
He promised 20 percent of compute to the team preventing human extinction. They got 1 to 2 percent, on the oldest machines. Then he dissolved the team.
The superalignment betrayal is documented, not alleged. In mid-2023, OpenAI publicly pledged to dedicate 20 percent of its computing power to a "superalignment team" tasked with preventing AI from causing "disempowerment of humanity or even human extinction." The New Yorker found the team actually received 1 to 2 percent of computing power, and that on OpenAI's oldest hardware. The team was dissolved in May 2024.
Jan Leike's resignation letter is the smoking gun for the safety community. The former head of alignment quit and publicly wrote that OpenAI's "safety culture and processes have taken a backseat to shiny products." Altman told the New Yorker his "vibes don't match a lot of the traditional AI-safety stuff." He added that OpenAI would "run safety projects, or at least safety-adjacent projects"—language so vague it confirms the criticism.
Amodei's "merge and assist" clause shows the safety framework was gutted from the start. In 2019, during the Microsoft investment negotiations, Amodei's top demand was a clause requiring OpenAI to stop competing if another firm built AGI safely first. When the deal closed, Amodei discovered Microsoft had been given veto power over that exact clause—and that Altman had personally denied the contractual terms to his face. The safety infrastructure wasn't abandoned later. It was undermined at the moment of incorporation.
Where This Lands
The memos and depositions are damning: Altman's own chief scientist, CTO, and safety lead all independently concluded he couldn't be trusted, documented it in hundreds of pages, and two of the three left to build competitors. On the other hand, every key source has a competitive or personal interest in damaging Altman, and the board that acted on their concerns collapsed within days. Either way you read it, the safety concerns persist.
Sources
- New Yorker: Sam Altman Investigation by Ronan Farrow and Andrew Marantz
- Semafor: New Yorker Investigation on Altman's Trustworthiness
- TechBrew: OpenAI Sam Altman Memos
- Medium: Sutskever Deposition Analysis
- Decrypt: Inside Deposition Showed OpenAI Nearly Destroyed Itself
- Gizmodo: Sam Altman's Alleged Untrustworthiness
- Gary Marcus: Sam Altman Unconstrained by the Truth
- The Decoder: OpenAI's Safety Brain Drain
- OpenAI: Introducing Superalignment
- CNBC: OpenAI Superalignment Team Dissolution
- The Neuron: Ilya Sutskever's Secret Memo
- TechCrunch: OpenAI Board Restructuring
- Nieman Lab: Sam Altman Investigation