In February 2020, a South Korean television documentary reunited a mother named Jang Ji-sung with her dead daughter Nayeon -- in virtual reality. Nayeon died of blood cancer at seven. It took eight months and a team of motion capture artists to build the simulation. The YouTube clip has over 20 million views. Two years later, Amazon demoed Alexa reading a bedtime story in a dead grandmother's voice, cloned from one minute of audio. The internet called it a monstrosity. But the industry kept building. Today, at least a dozen companies sell AI tools that simulate conversations with the dead, in a market worth an estimated $22 billion. The griefbot is probably here to stay. The question is whether that's a good thing.

1. This Is Good for You (Robert Neimeyer, Muhammad Ahmad)

Modern grief science says maintaining connections with the dead is healthy. Griefbots might be the tool.

The old model of grief -- where healing means accepting the loss and moving on -- is outdated. Robert Neimeyer, a professor emeritus of psychology at the University of Memphis who has published over 500 articles on how people process loss, argues that healthy grieving is not about letting go but about finding new ways to maintain connection with the deceased while continuing to live and grow. He envisions patients working through feelings with an AI simulation of their loved one and bringing the insights back to a human therapist.

Researchers are actively testing their own griefbots, and the first formal findings are on the way. Muhammad Aurangzeb Ahmad, a computer science professor at the University of Washington Bothell, created a digital twin of his late father from audio recordings, video, text messages, and letter transcripts, building a messenger program that imitates his father's communication style. Neimeyer is collaborating with other scholars on a formal study of how AI grief tools affect bereaved people.

Early evidence, while limited, does not support the fear that griefbots cause people to withdraw. One study found that griefbot users were actually more socially active after their loss than non-users. Most mourners appear to use the bots as a transitional tool during the most intense period of grief, not as a permanent replacement for human connection.

2. You're Not Healing, You're Avoiding Reality (Sherry Turkle, MIT)

A simulated presence isn't a connection -- it's a way to avoid accepting that someone is gone.

Sherry Turkle has spent her career at MIT studying what happens when people form emotional bonds with machines, and she thinks griefbots are dangerous. Turkle, whom the New York Times called "the conscience of the tech industry," is conducting a qualitative study of people who talk to AI versions of dead spouses, parents, and children on ChatGPT, Replika, and other platforms. Her concern is straightforward: AI simulations make it hard for the bereaved to let go, and dependency on simulated empathy erodes the capacity for authentic human connection.

Grief is about loss. Healthy mourning traditionally involves accepting that someone is absent -- sitting with the loss until you can carry it. Griefbots do the opposite: they create an illusion of continued presence. And because AI interactions get tailored to highlight the deceased's best qualities, they risk transforming mourning into something closer to addiction. Researchers at UAB have flagged increased risk of complicated grief disorder -- a clinical condition where people remain locked in acute grief rather than recovering.

The marketing makes the incentives painfully clear. A product that promises a "30-year relationship with your deceased grandmother" does not have closure on its roadmap. Ana Schultz, a 25-year-old in Illinois, uses Snapchat's AI to ask her dead husband Kyle for cooking advice -- typing in ingredients, getting meal suggestions back. That is not a patient working through grief. That is a customer.

3. The Dead Didn't Say Yes (Victoria Haneman, Creighton University)

There is no legal right not to be resurrected as an AI. That's a problem.

Victoria Haneman, a law professor at Creighton University, has a question no one in the grief tech industry wants to answer: did the dead person consent to this? In a 2024 paper in the Boston College Law Review titled "The Law of Digital Resurrection," Haneman argues that AI can now reconstruct the deceased with appearance, voice, emotion, and memory that may be indistinguishable from a live interaction -- yet US law offers virtually no protection against it. Privacy law, property law, intellectual property law, and criminal law all fail the dead.

Most Americans die without a will, which leaves their data in the hands of whichever platforms hold it. The Revised Uniform Fiduciary Access to Digital Assets Act -- a model law adopted by most states, and the closest thing to a national framework for digital estates -- doesn't address digital resurrection. Haneman's proposed fix: grant estates a time-limited right to delete a deceased person's data so they cannot be recreated without consent. Her analogy: the law already protects corpses against abuse; a person's digital remains deserve similar protection.

Some legal movement is happening, but it's patchwork and mostly aimed at celebrities. California's AB 1836, effective January 2025, lets estates sue over unauthorized digital replicas of a deceased person's voice or likeness. The federal NO FAKES Act, introduced in July 2024, would create a federal right to control digital replicas of living or deceased individuals. But neither law was written with grief tech in mind. If your mother's voice is cloned from her voicemails and sold back to you as therapy -- and she never agreed to any of it -- the law has almost nothing to say.

Where This Lands

The grief therapists who support griefbots say the old "let go and move on" model of grief is clinically outdated, and they point to research showing that maintaining bonds with the dead can be healthy. The psychologists who oppose them counter that there is almost no long-term data on what happens when that bond is mediated by a commercial AI product designed to keep you engaged. And the legal scholars are plainly right that the entire industry exists in a consent vacuum -- the dead never opted in, and the law hasn't caught up. Whether griefbots become a standard tool of grief therapy or a cautionary tale about monetizing loss depends on something nobody has yet: evidence. The technology arrived years before the research that could tell us what it does to people.

