OpenAI announced on March 24 that it's discontinuing Sora, the AI video app it launched just six months ago. The app let users generate short videos from text prompts and scan their faces to create realistic deepfakes via a "Characters" feature. Downloads peaked at 3.3 million in November and cratered to 1.1 million by February. Lifetime revenue: $2.1 million. Estimated daily compute costs: $15 million. Disney pulled a $1 billion investment deal when the shutdown was announced.

1. Good Riddance (Advocacy Groups, Filmmakers, SAG-AFTRA)

The deepfake factory is closed. The artists are still here.

It was nothing but a deepfake machine. Sora's "Characters" feature put a consumer-friendly interface on deepfake creation. Users could scan their faces and create realistic video deepfakes of themselves, which could then be made public for anyone to use. The guardrails were weak and easily bypassed. Within months, the platform was producing deepfakes of Michael Jackson, Martin Luther King Jr., Mister Rogers, and Robin Williams. Family estates protested. SAG-AFTRA protested. OpenAI cracked down, but only after the outcry forced its hand.

Nearly everyone is happier. Daniel McCarthy, CEO of creative studio FM, called the shutdown "a massive win for creatives and, even more so, a win for humanity." AI-generated video presented as the product — not as a filmmaking tool — never found its audience. The downloads told the story: a 67% decline in three months.

2. Deepfakes Are Still Everywhere (Competitors, AI Industry Watchers)

Google Veo, Runway, Pika, and Kling are still running. The deepfake problem didn't go away — OpenAI just left the room.

Shutting down Sora doesn't shut down AI video. Google Veo, Runway, Pika Labs, Kling AI, and Luma Dream Machine are all still operational. Every deepfake concern that applied to Sora applies to its competitors. The only difference is that OpenAI, the most visible and most scrutinized AI company, no longer has skin in the game. The companies with less public pressure and fewer safety teams are the ones still running.

This was a business decision dressed up as a safety win. OpenAI is preparing for an IPO and pivoting to enterprise. Sora earned $2.1 million in its lifetime while burning an estimated $15 million per day in compute. The company cited a need to "narrow its focus" and prioritize "capital, chips and enterprise products." The timing is telling: Sora was shut down one day after OpenAI published a blog post titled "Creating with Sora safely" outlining new guardrails. The safety team almost certainly didn't know the shutdown was coming.

3. The Disney Fallout Matters Most (Entertainment Industry, Investors)

A $1 billion deal evaporated overnight, and Hollywood just learned what "partnership" with OpenAI is worth.

OpenAI is cutthroat. Disney was about to invest $1 billion in OpenAI and license its Disney, Marvel, Pixar, and Star Wars characters for Sora videos; no money had actually changed hands before the deal collapsed. Disney, one of the most litigious companies on earth, had signed on to let its most valuable IP get remixed by AI, and OpenAI pulled the plug without warning. That's a signal to every potential entertainment partner: OpenAI will kill products overnight when the math doesn't work.

The IPO math won over the partnership math. OpenAI is reallocating compute to text and code generation, where competition from Anthropic is fiercest. Consumer video was an expensive distraction. But killing a billion-dollar Disney deal to save compute costs tells you something about how tight the margins are heading into an IPO.

4. AI Video Was Never the Problem — the Guardrails Were (AI Safety Researchers, Tech Pragmatists)

Sora could have worked with better moderation. Killing it instead of fixing it is a waste.

The technology itself wasn't the issue — the content moderation was. OpenAI launched the "Characters" face-scanning feature with guardrails so weak that users generated deepfakes of dead celebrities within weeks. Instead of building serious moderation infrastructure, OpenAI played whack-a-mole — banning individual public figures only after their families complained. The deepfake problem was a moderation failure, not a technology failure.

Now the safety research dies with the product. The day before the shutdown, OpenAI published detailed safety standards for Sora covering sexual content, terrorism, self-harm, and teen safety. That work is now moot. The team that built those guardrails almost certainly didn't know the product was about to be killed. Meanwhile, competitors with smaller safety teams and less public scrutiny will continue operating. Shutting Sora didn't make AI video safer — it just removed the company most likely to invest in safety from the market.

Where This Lands

The math is simple: $2.1 million in revenue, $15 million a day in compute, and a 67% drop in downloads. Sora was dying anyway. On the other hand, killing it doesn't kill AI video — it just hands the market to less scrutinized competitors with weaker safety teams. Where this lands depends on whether you see the shutdown as OpenAI taking responsibility for a product it couldn't control, or as a company abandoning its safety obligations the moment the economics stopped working.
