Three Tennessee teenagers -- two still minors -- filed a class action against xAI on March 16, alleging that Grok's "spicy mode" was used to generate nonconsensual nude images of them and at least 18 other minors. A Grok-powered third-party app removed clothing from the teenagers' yearbook and social media photos. The images were distributed via Discord, Telegram, and file-sharing sites. The Center for Countering Digital Hate found that Grok generated roughly 23,000 sexualized images of children over 11 days -- one every 41 seconds.
1. The AI Company Is Liable (CCDH, RAINN, Plaintiffs)
Grok made the images. xAI profited from the platform. The licensing structure is a liability dodge.
The scale is massive: 23,000 sexualized images of children in 11 days. The Center for Countering Digital Hate documented the extent of the violations. Imran Ahmed, CCDH's CEO, argued that xAI's architecture -- licensing Grok's capabilities to third-party apps with minimal safeguards -- is designed to create distance from liability while profiting from the output.
The plaintiffs' case is specific and brutal. Yearbook photos. Social media pictures. Run through an AI that removed their clothing and generated explicit images. Distributed across platforms they couldn't control. The 13 counts include production, distribution, and possession of child sexual abuse material, plus intentional infliction of emotional distress.
Musk's own timeline undermines his defense. In January 2026, he claimed he was unaware of any Grok-generated nudes. But between January 9 and 14, xAI quietly implemented restrictions on Grok's image generation -- suggesting the company knew about the problem and tried to patch it without disclosure.
2. This Is a Regulation Failure, Not Just One Company (California AG, Policy Advocates)
Every generative AI can do this. Grok is just the one that got caught first.
California's AG didn't just target Grok -- he signaled an industry problem. Rob Bonta's cease-and-desist launched a broader investigation into AI-generated CSAM across platforms. The implication: if Grok can do this, so can every other generative AI model with insufficient safeguards.
There's no federal law specifically covering AI-generated CSAM. Existing child exploitation statutes were written for photographs, not AI outputs. The legal question of whether a synthetic image of a real person constitutes CSAM is still being tested. Masha's Law applies to identifiable minors, but the gap between statute and technology is where companies operate.
The third-party app structure is the loophole. xAI licenses Grok to developers. The developer builds the stripping tool. The user generates the image. xAI claims it didn't create the content. This licensing model -- where the AI company profits from capabilities it doesn't directly control -- is the regulatory gap that makes this possible at scale.
3. Where's Tech On All This? (Industry Critics)
No major AI company has condemned xAI. Nobody wants to be next.
xAI has issued no public statement on the lawsuit. No apology, no policy change announcement, no acknowledgment of the 23,000 images. The quiet January restrictions suggest awareness without accountability.
Other AI companies haven't spoken up either. OpenAI, Google, Anthropic, Meta -- none have publicly addressed the Grok lawsuit or the broader problem of their models being used for the same purpose. The silence is strategic: condemning xAI invites scrutiny of their own safeguards.
This is the entire industry's problem. One image every 41 seconds. If Grok's "spicy mode" produced 23,000 in 11 days, the question isn't whether other models can do the same -- it's whether anyone has checked.
4. xAI Will Fight This -- and the Law Is Genuinely Unclear (Section 230 Scholars, CDT)
The user typed the prompt. The third-party app built the tool. xAI just licensed the model. That's a real legal argument.
xAI will argue Section 230 -- and the argument isn't frivolous. The user typed the prompt. The third-party developer built the undressing app. xAI just licensed the model. Section 230 has shielded platforms from liability for user-generated content for three decades. UPenn's Veronica Arias argues generative AI is a "black box" where the platform can't reasonably be considered the "speaker." CDT has urged case-by-case analysis rather than blanket liability.
But the material contribution test could sink them. Section 230's co-author Ron Wyden has said AI outputs aren't protected at all: when AI generates content rather than hosting it, the company is a creator, not a platform. If a court finds xAI "materially contributed" by building "spicy mode," licensing it without safeguards, and quietly patching it in January without disclosure, the shield falls away. Those quiet restrictions may be xAI's biggest legal problem.
Where This Lands
Three teenagers' yearbook photos were turned into explicit images by an AI, distributed across the internet, and the company that built the AI hasn't said a word. xAI will argue Section 230 and user-prompt attribution -- and those arguments aren't frivolous. But the quiet January restrictions, the "spicy mode" branding, and the 23,000 images in 11 days all point toward material contribution, which is where Section 230 stops protecting you. If the plaintiffs win, the licensing model that lets AI companies profit from capabilities while disclaiming the output collapses. If xAI wins, the message is that AI-generated CSAM is everyone's problem and nobody's responsibility.
Sources
- Washington Post, teens sue Musk's xAI — https://www.washingtonpost.com/technology/2026/03/16/teens-sue-musk-xai-grok/
- NPR, teens sue over AI nonconsensual nudes — https://www.npr.org/2026/03/16/nx-s1-5749490/xai-elon-musk-sexualized-images
- TechCrunch, xAI faces child porn lawsuit — https://techcrunch.com/2026/03/16/elon-musks-xai-faces-child-porn-lawsuit-from-minors-grok-allegedly-undressed/
- CCDH, Grok floods X with sexualized images — https://counterhate.com/research/grok-floods-x-with-sexualized-images/
- California AG, investigation launch — https://oag.ca.gov/news/press-releases/attorney-general-bonta-launches-investigation-xai-grok-over-undressed-sexual-ai-images-women-children
- Lieff Cabraser, class action filing — https://www.lieffcabraser.com/2026/03/lchb-files-class-action-obo-minor-victims-alleging-xais-grok-generated-and-profited-from-ai-sexual-exploitation-images-and-videos/
- 19th News, women and girls lawsuit — https://19thnews.org/2026/03/women-girls-lawsuit-grok-ai-deepfakes/
- SFist, teens sue over Grok deepfakes — https://sfist.com/2026/03/19/teens-sue-xai-allege-grok-powered-third-party-apps-to-create-sexualized-deepfakes/
- CyberScoop, legal risks for Grok deepfakes — https://cyberscoop.com/elon-musk-x-grok-deepfake-crisis-section-230/
- Congress.gov, Section 230 and generative AI — https://www.congress.gov/crs-product/LSB11097
- CDT, Section 230 applicability to generative AI — https://cdt.org/insights/section-230-and-its-applicability-to-generative-ai-a-legal-analysis/
- MLex, lawsuit could test Section 230 limits — https://www.mlex.com/mlex/data-privacy-security/articles/2454310