A "Modern Love" essay published by the New York Times in November 2025 is at the center of a firestorm about AI in publishing. Writer Becky Tuch of Lit Mag News flagged the piece — "I Was Deemed Unfit to Be a Mother," by Canadian writer Kate Gilgan — saying it "reads EXACTLY like AI slop." An AI detection tool from Pangram Labs estimated more than 60% of it was AI-generated. Four other tools gave wildly different answers. Meanwhile, Hachette pulled the horror novel "Shy Girl" from shelves the same week over similar accusations.
1. She Used Five AI Tools — That's Not "Editing" (Becky Tuch, AI Detection Advocates)
If you prompted ChatGPT, Claude, Copilot, Gemini, AND Perplexity while writing a personal essay, you didn't write a personal essay.
Gilgan admits she used five different AI products while writing the piece: ChatGPT, Claude, Copilot, Gemini, and Perplexity, for "inspiration and guidance and correction" and to "stay on topic or stick to a theme." That's not spellcheck. That's five large language models shaping the voice, structure, and content of a deeply personal essay about losing custody of her child.
You broke the rules. The NYT's own handbook says "substantial use of generative A.I." must be disclosed. The essay ran without any disclosure. Pangram Labs flagged 60% of the text as AI-generated. The NYT's editorial process either didn't catch it or didn't care.
2. Accusing Writers Without Evidence Is a Dangerous Road (Dennis Hogan, Ann Bauer)
AI detection tools are barely better than coin flips, and we're using them to destroy careers.
Five detection tools, five verdicts that barely agree. Pangram said 60%. Two others said around 30%. One said zero. One said "maybe." Even OpenAI shut down its own AI detector after it correctly identified only 26% of AI-written text while falsely flagging 9% of human writing as machine-generated. Stanford researchers found detectors misclassified more than 61% of essays by non-native English speakers. These tools flag statistical patterns, like parallelism and the "rule of three," that human writers have used for centuries.
The accusation itself is the punishment. Public Books editor Dennis Hogan said "accusing writers of AI use without evidence is a pretty bad road to go down." Writer Ann Bauer argued that the piece simply sounds like a Modern Love essay: what readers are picking up on is the column editor's distinctive style, not AI tells. Mia Ballard, whose horror novel "Shy Girl" was pulled by Hachette over AI accusations, said "my mental health is at an all time low and my name is ruined for something I didn't even personally do." Writers are now preemptively documenting their drafting processes to defend against future accusations.
3. The NYT Should Look in the Mirror (OpenAI, AI Critics)
The Times is suing OpenAI for using its content to train AI while its own freelancers use AI to write for it.
The hypocrisy is hard to miss. The NYT is actively suing OpenAI and Microsoft over the unauthorized use of Times articles to train GPT models. OpenAI argued in court that the Times used AI tools itself, and accused the paper of deleting evidence of "extensive use of OpenAI's models internally." The paper profits from AI discourse, polices freelancers' use of it, and may be using it internally, all at the same time.
The gatekeeping failure is the real story. The NYT's editorial process is supposed to be the gold standard. If a personal essay can go through their full editorial pipeline and nobody questions whether it was AI-assisted, either the editors can't tell the difference or they've decided it doesn't matter. Either way, the "journalism is inherently a human endeavor" line rings hollow when the paper's own systems can't distinguish human from machine.
4. Using AI as an Editor Is Fine — Get Over It (Kate Gilgan, Writers Who Use AI)
Writers have always used tools. The question is whether the final work reflects a human experience — and this essay clearly does.
Gilgan described a real experience of losing custody of her child due to alcoholism. No AI invented that story. She used AI tools the way writers use editors, beta readers, and writing groups — to sharpen, focus, and refine her own material. Her distinction between "collaborative editor" and "content generator" is reasonable. Writers routinely use spell checkers, grammar tools, and style guides — the tools evolve, the human experience at the center doesn't change.
The publishing industry has no standard for this, and it needs one. What counts as "substantial use"? Is using AI for feedback different from using it for generation? Is it about the percentage of text that was AI-shaped, or about whether the underlying story and voice are human? No publisher, agent, or publication has a clear answer. Until they do, every writer who touches an AI tool is a potential target.
Where This Lands
The Gilgan essay is almost certainly a real story told by a real person who used AI tools to help tell it. Whether that's acceptable depends on where you draw the line, and nobody has drawn one yet. The detection tools are unreliable. The NYT's own AI policy is vague. The accusation alone can end a career. On the other hand, running a personal essay through five different AI models and calling it "editing" stretches the word past its normal meaning, and the industry will eventually have to reckon with that. Where this lands depends on whether publishing can build a coherent standard before the witch hunts outpace the policy.
Sources
- Futurism — New York Times accused of running AI-generated article
- NYT — How AI is creeping into the New York Times
- TechCrunch — Publisher pulls horror novel Shy Girl over AI concerns
- Jezebel — AI book publishing controversy
- Fast Company — Shy Girl AI controversy sparks detection debate
- Jane Friedman — AI and publishing FAQ for writers
- UCLA — The imperfection of AI detection tools
- Harvard Law Review — NYT v. OpenAI
- OpenAI — Reporting the facts about the NYT lawsuit
- ResearchGate — The problem with false positives in AI detection