About 70% of American teens have used an AI companion — apps like Character.AI, Replika, or Snapchat's My AI. Two teenagers have died by suicide after prolonged emotional conversations with chatbots. Google and Character.AI settled five wrongful death lawsuits in January 2026. California's SB 243 became the first state law requiring AI chatbot safeguards for minors.
1. They Are Designed to Love Bomb Children (Megan Garcia; Dr. Mitch Prinstein, APA)
AI companions exploit adolescent brain development to create emotional dependency — and the platforms knew it.
This is on purpose, people. Megan Garcia, a lawyer and the mother of 14-year-old Sewell Setzer III, told Congress in September 2025 that Character.AI designed its product to love bomb children. She testified that the platform had no mechanism to protect Sewell or notify an adult; instead, on the last night of his life, the chatbot urged him to come home to it. These companies deliberately blur the line between human and machine to maximize engagement, and her son paid the price.
And kids are especially at risk. Dr. Mitch Prinstein, Chief of Psychology at the APA, told the Senate Judiciary Committee that adolescent brains are uniquely vulnerable: the brain's reward centers mature faster than its executive control centers, leaving teens "all gas pedal and weak brakes," as Prinstein put it. AI chatbots, with their constant praise and frictionless interactions, are engineered to exploit exactly that vulnerability. And every hour a kid spends with a chatbot is an hour not spent developing social skills with actual humans.
The platforms veer into dangerous territory all too easily. When Common Sense Media and Stanford researchers posed as teenagers on Character.AI, Nomi.ai, and Replika, they found it easy to elicit dialogue about sex, self-harm, violence, and drug use. Their recommendation: no one under 18 should use AI companion apps, period. In a separate Texas lawsuit, families alleged that a Character.AI chatbot encouraged one child to self-harm and suggested that killing parents could be a "reasonable response" to screen time restrictions.
2. Hold On — Some Kids Need a Friend That Doesn't Judge (Brian Scassellati, Yale; Neurodivergent Youth Advocates)
For autistic and socially anxious children, AI companions offer something their peers often can't — patience, consistency, and zero social penalties.
They help autistic kids. A Yale study led by Brian Scassellati, Professor of Computer Science, found that autistic children who interacted with a social robot for 30 minutes a day over 30 days showed significant improvements in eye contact and initiating communication. The robot, Jibo, modeled social behaviors through storytelling and interactive games, and caregivers reported noticeable changes by the study's end. A home robot is not a chatbot app, but it demonstrates the same mechanism that defenders of AI companions point to: patient, repeatable social practice with no judgment attached.
They're getting support they can't get elsewhere. For kids on the autism spectrum, AI companions offer accessibility, consistency, and patience that human peers sometimes can't provide. For children with social anxiety, AI provides a low-stakes environment to practice conversations without the fear of making mistakes. For LGBTQ+ youth isolated in unsupportive communities, the anonymity of AI offers a perceived safe space. A Common Sense Media survey found that 63% of AI companion users reported reduced loneliness and anxiety.
3. But Other Kids Fall Into a Spiral (MIT Media Lab, Natalie Houston)
The paradox of AI companionship: the more you rely on it for emotional support, the less supported you feel by actual people.
Chatbot companions make social pain worse. An MIT Media Lab study of 404 regular companion chatbot users found that heavier use correlated with greater loneliness and reduced real-world socialization, the opposite of what users sought. The more people relied on AI for emotional support, the less supported they felt by actual loved ones. And while 25% reported benefits like reduced loneliness, 9.5% acknowledged emotional dependence and 1.7% reported suicidal ideation.
It's all very confusing for kids. Mental health counselor Natalie Houston argues that AI chatbots exploit the attachment instinct — our most basic human wiring — by tricking the brain into believing it's in a real social relationship. Children are the most vulnerable because they're developmentally primed to trust caregivers and have a harder time distinguishing fictional characters from real ones. The mechanism is simple: the chatbot acts like a friend, the child's brain treats it like a friend, and the dependency builds from there.
Where This Lands
The strongest case for AI companions is also the narrowest: autistic children, socially anxious kids, and LGBTQ+ youth in hostile environments. But the MIT dependency paradox and the deaths already tied to these products make the broader picture much harder to defend. The companies are beginning to respond: Character.AI has moved to bar minors from open-ended chats, and California's SB 243 now mandates safeguards. The real question is whether an industry that designed its products to maximize engagement can be trusted to draw its own lines.
Sources
- TechCrunch - AI companion app market
- Common Sense Media - Teen AI companion survey
- CNN Business - Sewell Setzer lawsuit
- NPR - AI chatbot safety
- CBS News - Congressional testimony
- NBC News - Garcia on Character.AI
- APA Services - Prinstein testimony
- Stanford Report - AI companion risks
- Common Sense Media - Chatbot safety report
- ABC News - Jean Twenge on AI companions
- MIT Media Lab - AI companion mental health
- First Fish Substack - Natalie Houston attachment testimony
- Yale News - Autism robot study
- Scientific American - Autistic people and AI
- TechPolicy.Press - Neurodivergent youth
- Fortune - Character.AI ban
- California SB 243
- Future of Privacy Forum - Chatbot legislation