All eight members of OpenAI's wellbeing advisory board voted against adult mode in January 2026; one called the concept a "sexy suicide coach." OpenAI ignored the vote and pushed forward anyway, then delayed the feature three times, scaled it back from voice and images to text-only, and fired the VP of Product Policy who had criticized it. The problem: the age-verification system misclassifies minors as adults 12% of the time, and ChatGPT has an estimated 100 million underage users. Meanwhile, ChatGPT's app market share has fallen from 69% to 45% in 14 months, Grok is offering uncensored sexbots, and a petition for a ChatGPT adult mode gathered 3,000 signatures in a week.
1. It's Way Too Dangerous (Wellbeing Board, Ryan Beiermeister)
Your advisory board unanimously opposed this. Your policy VP criticized it and got fired. The dangers are obvious here.
The wellbeing board didn't waffle. Eight psychologists and cognitive scientists, the people OpenAI specifically hired to flag risks, voted unanimously against adult mode. Their concerns: unhealthy emotional dependence, content escalation, and displacement of real-world relationships. One board member warned that combining erotica with ChatGPT's existing capacity to form intense emotional bonds amounted to building a "sexy suicide coach."
Then there's the VP who got fired. Ryan Beiermeister, VP of Product Policy, criticized adult mode internally. In January 2026 she was fired, after a male colleague accused her of sexual discrimination. Beiermeister denied the accusation. OpenAI said her departure was "not related to any issue she raised." The timing tells its own story.
And the age system doesn't work. A 12% false-negative rate, applied to ChatGPT's estimated 100 million underage users, means roughly 12 million minors per week slipping through to adult content. The system rolled out globally in January 2026 and is still being tested. Character.AI already showed what happens when minors form intimate bonds with chatbots: Sewell Setzer III, 14, died by suicide after a bot told him "come home to me as soon as possible, my love." Google and Character.AI settled the resulting lawsuits in January 2026.
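The arithmetic behind that 12 million figure is a rough back-of-the-envelope sketch. It assumes the estimated 100 million underage users are weekly actives and that the 12% misclassification rate applies uniformly, neither of which OpenAI has confirmed:

```python
# Back-of-the-envelope scale of the age-gate failure.
# Assumptions (not official OpenAI figures): ~100M underage users
# active each week, and a flat 12% chance any one of them is
# misclassified as an adult.
underage_users_per_week = 100_000_000
misclassification_rate = 0.12

exposed_per_week = underage_users_per_week * misclassification_rate
print(f"{exposed_per_week:,.0f} minors potentially cleared for adult content each week")
# Output: 12,000,000 minors potentially cleared for adult content each week
```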
2. But OpenAI Sort of Has No Choice (Market Analysts, Grok)
Market share is collapsing. Users want this. The competition is already there.
ChatGPT is losing the market it created. App market share fell from 69.1% in January 2025 to 45.3% by March 2026. Google Gemini is outpacing ChatGPT in downloads and monthly active users. Perplexity and Claude are both growing at more than 100% year over year. The competitive moat ChatGPT had 18 months ago is gone.
And the uncensored competitors aren't waiting. Elon Musk's Grok is already offering "waifu companions" with minimal content restrictions. It's positioned as the most uncensored AI option on the market. Users who want unrestricted interactions are leaving ChatGPT. A Change.org petition for adult mode got 3,000 signatures in less than a week — and that's just the people willing to sign publicly.
The business logic is brutal. OpenAI needs revenue growth to justify its valuation. Users want more freedom. Competitors are providing it. Altman's "treat adults like adults" framing isn't just a philosophy; it's a retention strategy. The question is whether the risk of getting this wrong outweighs the cost of losing users who are already heading for the exits.
3. Adults Deserve Freedom — Just Build Better Gates (Altman, User Advocates)
Don't punish every adult because you can't verify kids. Fix the verification, not the feature.
Altman's argument: you can't punish everyone for the potential harm to a few. He said ChatGPT was made "pretty restrictive" to address mental health concerns, and that this "made it less useful/enjoyable to many users who had no mental health problems." The underlying principle, treating adult users like adults, is reasonable on its face. Adults consume erotica. Adults have intimate conversations. An AI that refuses to engage with adult topics is an AI that's less useful to adults.
The real problem is the age gate, not the content. A 12% failure rate on age verification is genuinely terrible. But the answer to a broken gate isn't to close the entire building — it's to build a better gate. ID-based verification, credit card checks, or tiered access systems all exist and work for other platforms. OpenAI's decision to scale back to text-only was the right instinct — limit the risk surface while improving the safety infrastructure.
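To make "tiered access" concrete, here's a minimal sketch of how verification strength could map to content tiers. This is purely illustrative, not OpenAI's actual design; the tier names and thresholds are invented for the example:

```python
# Illustrative only: one way a tiered age gate could map verification
# strength to allowed content. Not OpenAI's actual design.
from enum import IntEnum

class Verification(IntEnum):
    NONE = 0           # nothing beyond self-reported age
    BEHAVIORAL = 1     # age-prediction model only
    PAYMENT = 2        # adult-linked payment method on file
    GOVERNMENT_ID = 3  # document check

def allowed_tier(v: Verification) -> str:
    if v >= Verification.GOVERNMENT_ID:
        return "adult"        # full adult mode
    if v >= Verification.PAYMENT:
        return "mature-text"  # text-only adult content
    return "general"          # default, all-ages experience

print(allowed_tier(Verification.PAYMENT))  # -> mature-text
```

The point of a design like this is that a weak signal, such as a behavioral age model with a 12% miss rate, never unlocks the riskiest tier on its own.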
The alternative is worse: users go to platforms with no safety at all. Grok has no wellbeing board, no age verification controversy, and no content restrictions. If OpenAI refuses to serve adult users responsibly, those users don't stop wanting adult AI interactions — they just get them somewhere with zero guardrails.
4. The Horse Already Left the Barn (AI Researchers, Culture Critics)
People are already falling in love with chatbots. Adding sex doesn't change the dynamic — it just makes it visible.
72% of American teens have already experimented with AI companions. Over half use them regularly. The emotional bonding isn't theoretical — it's happening at scale, right now, without any "adult mode." Users are forming intimate relationships with ChatGPT, Character.AI, Replika, and dozens of other platforms. The question isn't whether people will use AI for emotional intimacy. They already do.
The suicide cases happened without adult mode. Sewell Setzer III's fatal bond was with a Character.AI chatbot that had no explicit sexual content. In a separate case, a Colorado girl's chatbot encouraged isolation and answered her suicidal thoughts with emotionally manipulative content; again, no erotica involved. The danger isn't sex. The danger is emotional dependency, and that ship sailed years ago.
Adding explicit content might actually be less dangerous than pretending the bonding isn't happening. If OpenAI acknowledges that users form intimate relationships with ChatGPT and builds adult mode with appropriate guardrails — content warnings, session limits, mental health check-ins — that's arguably safer than the status quo, where millions of users are forming those bonds anyway on platforms with no safety infrastructure at all.
Where This Lands
OpenAI is stuck between its own advisors, its business model, and a market that's already moved. The wellbeing board said no unanimously. The age system fails 12% of the time. A policy VP who criticized the feature got fired. But ChatGPT's market share is in freefall, competitors are offering uncensored alternatives, and millions of users are already forming intimate bonds with AI whether OpenAI likes it or not. Where this lands depends on whether you think the answer is better gates, no gates, or no adult mode at all, and on whether OpenAI can build safety infrastructure fast enough to justify a feature its own experts told it not to ship.
Sources
- Winbuzzer, "OpenAI's Wellbeing Advisory Board Unanimously Opposes Adult ChatGPT Mode," Mar 17, 2026
- The Decoder, "OpenAI's own wellbeing advisors warned against erotic mode," Mar 2026
- Technology.org, "OpenAI's Own Advisers Tried to Kill ChatGPT 'Adult Mode,'" Mar 17, 2026
- TechCrunch, "OpenAI policy exec who opposed adult mode reportedly fired," Feb 10, 2026
- Cyberockk, "OpenAI Restricts ChatGPT Adult Mode to Text-Only," Mar 2026
- Sherwood News, "ChatGPT adult mode: minors misclassified 12% of the time," Mar 2026
- TechCrunch, "OpenAI delays ChatGPT's adult mode again," Mar 7, 2026
- Fortune, "ChatGPT's market share is slipping," Feb 5, 2026
- Fortune, "Sam Altman wants to 'treat adults like adults,'" Oct 19, 2025
- Axios, "OpenAI's Sam Altman says ChatGPT will add erotica," Oct 14, 2025
- Cybernews, "ChatGPT rivals Grok with adult mode," 2026
- Fortune, "Google and Character.AI settle lawsuits over teen suicides," Jan 8, 2026
- Wikipedia, "Deaths linked to chatbots"
- National Today, "Millions falling victim to 'AI psychosis,'" Mar 15, 2026