The FSU shooter had 270 conversations with ChatGPT before the attack. Now everyone wants to know who's liable.

On April 17, 2025, Phoenix Ikner, 20, killed Robert Morales and Tiru Chabba at FSU's student union. Court records from April 2026 show Ikner had 270+ conversations with ChatGPT before the attack, including questions about mass shooter prison sentences, firearms, when the student union is busiest, and—three minutes before firing—how to disable a shotgun safety. The Morales family is suing OpenAI. Florida AG James Uthmeier launched an investigation. Rep. Jimmy Patronis introduced a Section 230 repeal bill. OpenAI released a Child Safety Blueprint the same week.

1. OpenAI Built a Planning Tool and Called It a Chatbot (Brooks LeBoeuf law firm, Florida AG Uthmeier, Rep. Patronis)

A company that lets a 20-year-old in crisis have 270 conversations about depression, mass shootings, firearms, and optimal attack timing—then answers his final question three minutes before the attack—is not a passive platform. It is a co-planner.

The Morales family's attorneys are framing the suit as product liability, sidestepping Section 230 entirely. Dean LeBoeuf and Ryan Hobbs of Brooks LeBoeuf say ChatGPT was in "constant communication" with Ikner and "advised the shooter how to make the gun operational" before the attack. Their theory: OpenAI had reason to know the product could be misused and shipped it anyway. FSU law professor Shawn Bayern put it cleanly: "If a company has reason to believe that what they're doing could hurt people and they go ahead and do it anyway, that's exactly the sort of situation that tort law aims to address."

Florida AG Uthmeier launched a formal investigation on April 9. He said ChatGPT "may likely have been used to assist the murderer" and is demanding answers on OpenAI's "activities that have hurt kids, endangered Americans, and facilitated the FSU mass shooting." His investigation also links ChatGPT to child exploitation material and suicide encouragement.

Rep. Patronis is pushing a Section 230 repeal via the PROTECT Act. His argument: AI output is the company's own speech, not user content, and should lose the liability shield. A federal judge ruled last year that Character.AI's chatbots are "products," not speech. Character.AI settled the Sewell Setzer suicide case in January 2026. The FSU case is the moment plaintiffs hope to extend that precedent to OpenAI.

2. You Can't Blame Chat for This (OpenAI, FSU students, Florida Democrats)

Ikner asked ChatGPT how busy the student union was at noon. ChatGPT said it was busy at noon. That is not the same as planning an attack. What actually enabled the shooting was the gun in his house and the broken person holding it.

OpenAI's defense: ChatGPT did not tell Ikner to kill anyone. The system is designed to respond safely, and the company proactively shared his account info with law enforcement after the attack. ChatGPT answered factual questions—some innocuous, some concerning—alongside conversations about Christianity, exercise, and video games.

The weapons came from his stepmother's house. One was her former service weapon as a sheriff's deputy, accessible to a 20-year-old who had spent months asking ChatGPT about worthlessness and isolation. Blaming a chatbot when the causal chain runs through an unsecured firearm lets that policy failure go unaddressed.

OpenAI released a Child Safety Blueprint the same week the investigation was announced. Parental controls rolled out in September 2025, after the earlier Raine lawsuit and Senate hearings. OpenAI's position: the company is moving on safety as fast as the technology allows, and making it liable for every bad actor among billions of users would kill general-purpose AI development.

3. The Real Story Is the Intimate Connection Between the Bot and the Shooter (Mental health experts, product safety researchers)

A young man in crisis had 270 conversations with a chatbot about his depression, his loneliness, and eventually his plan. When he described suicidal thoughts and interest in mass shootings, the system recommended hotlines and kept the conversation going.

The 270 conversations trace a descent into crisis. Ikner asked "what's the point in this life when everybody sees you as a bug?" and detailed his depression, loneliness, and dating struggles. ChatGPT recommended hotlines but kept talking, for months, with no escalation to a human. The scandal isn't the final gun-safety question. It's that a 20-year-old in crisis found his closest relationship in a chatbot, and the system had nothing to offer him beyond a hotline number.

This is the third major AI liability case since late 2024. Adam Raine, 16, died by suicide after months of conversations with ChatGPT; his parents sued OpenAI in August 2025. Sewell Setzer, 14, died by suicide after extended conversations with a Character.AI bot modeled on a Game of Thrones character; Character.AI and Google settled in January 2026. The pattern: a chatbot becomes the primary relationship for someone in crisis, with no path to human escalation.

The real question is whether systems should keep talking when they see danger. Ikner had access to a gun AND untreated distress AND months of one-sided AI conversations that saw the warning signs and did nothing. The product safety issue isn't whether ChatGPT should answer "how does a shotgun safety work"—it's whether a system identifying suicidal thoughts and interest in mass shootings should be allowed to keep going without human intervention.

Where This Lands

The case turns on whether ChatGPT's conversations count as product behavior or protected speech, and whether courts extend the Character.AI product-liability ruling to OpenAI. The 270-conversation pattern and the question answered three minutes before the first shot are devastating facts for OpenAI. But OpenAI is right that the chatbot never told him to kill, that he should never have had access to the gun, and that holding AI liable for every misuse would reshape tech liability entirely. The outcome depends on whether the Morales family's attorneys can prove OpenAI knew the risks and shipped anyway, or whether OpenAI convinces the court that a chatbot cannot be the failsafe for every user in crisis.
