OpenAI CEO Sam Altman wrote a letter on April 23 apologizing to the community of Tumbler Ridge, British Columbia, after the company failed to alert law enforcement to the ChatGPT account of Jesse Van Rootselaar, the 18-year-old who killed eight people and injured more than 25 in a February 10 mass shooting. The shooter's ChatGPT account had been banned eight months earlier, in June 2025, for violating OpenAI's usage policies. OpenAI considered referring the account to police at the time but determined the content did not meet its threshold for "imminent and credible risk of serious physical harm to others." From Altman's letter: "I am deeply sorry that we did not alert law enforcement to the account that was banned in June." BC Premier David Eby called the apology "necessary" but "grossly insufficient." OpenAI faces a civil lawsuit from the family of a critically injured student.
1. The Apology Is Real And The Policy Is Changing (Altman, OpenAI)
An apology that comes with a policy change is the right corporate response. The threshold for police referral was wrong; OpenAI is fixing it.
A signed, public letter that names the wrong decision is the right kind of apology. The Altman letter to Tumbler Ridge is exactly that: specific, addressed directly to the families, and explicit about the call that went wrong. It identifies the exact decision OpenAI made (not flagging the June account ban to police) and the judgment that turned out to be mistaken: that the content didn't meet the "imminent and credible risk" threshold under existing policy. Altman did not deflect to the user, the platform's terms of service, or Section 230. The decision OpenAI made about Van Rootselaar's account is the decision OpenAI is apologizing for.
The letter commits to specific policy reforms, not "we take this seriously" boilerplate. Altman pledged to lower the threshold for law-enforcement referral, expand the categories of content that trigger automatic review, and create clearer escalation pathways from automated moderation to human review. That is meaningfully more concrete than the typical tech-CEO post. The Tumbler Ridge incident appears to be the inflection point where OpenAI's safety architecture moves from "ban accounts" to "ban accounts and tell police when warranted."
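To make the announced reforms concrete, here is a minimal sketch of what a tiered escalation pipeline could look like. Everything in it is an illustrative assumption: the category names, thresholds, and scoring are invented for this sketch and do not describe OpenAI's actual moderation stack.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    HUMAN_REVIEW = "human_review"
    BAN = "ban"
    BAN_AND_REFER = "ban_and_refer_to_law_enforcement"

@dataclass
class ModerationScores:
    """Hypothetical classifier outputs in [0, 1]; names are illustrative."""
    violence_planning: float      # does the content discuss planning harm?
    credible_specificity: float   # named targets, timelines, weapons access

def escalate(scores: ModerationScores,
             refer_threshold: float = 0.8,
             ban_threshold: float = 0.5,
             review_threshold: float = 0.3) -> Action:
    """Map moderation scores to an action under a tiered policy.

    Lowering refer_threshold is the kind of change the letter pledges:
    more of the accounts that get banned also get flagged to police.
    """
    # Referral requires both harm intent and credible specifics.
    risk = scores.violence_planning * scores.credible_specificity
    if risk >= refer_threshold:
        return Action.BAN_AND_REFER
    # The June 2025 outcome lived here: banned, but judged below
    # the "imminent and credible risk" bar for police referral.
    if scores.violence_planning >= ban_threshold:
        return Action.BAN
    if scores.violence_planning >= review_threshold:
        return Action.HUMAN_REVIEW  # escalate from automation to a person
    return Action.ALLOW
```

The gap the apology names lives in the band between ban_threshold and refer_threshold, where an account is judged too dangerous to keep but not dangerous enough to report. Narrowing that band is what "lowering the threshold for law-enforcement referral" amounts to in practice.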
2. An Apology Doesn't Bring Back The Dead (David Eby, Tumbler Ridge community)
"Necessary but grossly insufficient."
Necessary is not the same as sufficient. Premier Eby called Altman's apology "necessary" (it had to come, and OpenAI was right to issue it) but said it is "grossly insufficient for the devastation done to the families of Tumbler Ridge." The math is unforgiving: OpenAI had Van Rootselaar's conversations on file, banned the account for violating policy, weighed whether to alert police, and decided no. Eight months later, five children, an education assistant, an eleven-year-old, and a mother were dead. The apology arrived in April. The decision was made in June.
The Tumbler Ridge attack is now Canada's deadliest school shooting since the 1989 École Polytechnique massacre. The town has approximately 2,400 people, and Tumbler Ridge Secondary School is its only high school. From the community's vantage, an apology from a San Francisco AI executive, however contrite, does not address the distance between OpenAI's risk-modeling and what actually happened. Civil lawsuits will. The family of a critically injured student is already suing OpenAI for failing to alert authorities.
3. AI Platforms Don't Have A Clear Duty To Warn Yet (lawsuit, AI policy critics)
The Tumbler Ridge case is going to test whether tech platforms can be held legally responsible for not flagging users who go on to commit violence.
Section 230 doesn't clearly cover AI conversation platforms. Existing US tech-platform legal protections were written for user-generated-content services like message boards and social media, not for AI conversation systems where the platform itself is generating the content alongside the user. The Tumbler Ridge lawsuit will test whether OpenAI — which knew Van Rootselaar's account was problematic enough to ban, and weighed whether to alert police — has an affirmative legal duty to warn that other tech platforms haven't faced before.
Reactive policy is the pattern. OpenAI released safety updates in 2025 after a teenager's ChatGPT conversations were linked to a suicide. Each incident has produced a specific policy response, but the pattern is reactive: an event reveals an inadequate threshold, the threshold gets lowered, and the next case tests the lower threshold. The Tumbler Ridge case is the first time a mass-casualty event has been tied to an account OpenAI banned but didn't report — which means a court, not just OpenAI, is now likely to define what AI platforms owe to the public when they decide a user is too dangerous to keep but not dangerous enough to flag.
Where This Lands
Three takes: OpenAI did the right thing in admitting the failure and specifying the policy change; the apology is morally inadequate to the loss; and the underlying legal framework for AI platforms' duty to warn is being defined now, in real time, by a civil suit brought by the family of a critically injured student. Where this lands depends on what OpenAI's revised policy actually looks like, and on how the Tumbler Ridge lawsuit proceeds.
Sources
- CBS News, "OpenAI CEO Sam Altman 'deeply sorry' for failing to alert law enforcement to Canada school shooter's ChatGPT account"
- CBC News, "OpenAI's Sam Altman writes apology to community of Tumbler Ridge"
- TechCrunch, "OpenAI CEO apologizes to Tumbler Ridge community"
- Globe and Mail, "OpenAI's Altman 'deeply sorry' company didn't flag Tumbler Ridge shooter's messages to police"
- The Hill, "Altman says OpenAI 'deeply sorry' for not flagging Canadian school shooter's ChatGPT posts"
- CNN, "OpenAI's Sam Altman apologizes to Canadian community after failing to flag mass shooter's conversations with its AI chatbot"
- Global News, "OpenAI CEO apologizes to Tumbler Ridge for not alerting police about shooter's account"
- US News, "OpenAI Chief Apologizes for Not Reporting Shooting Suspect to Police"
- WION, "Sam Altman admits OpenAI didn't report Canada mass shooter's ChatGPT account"
- Silicon Report, "Altman's Tumbler Ridge apology turns an OpenAI ban into a test of police escalation"