Blake Lemoine was a Google software engineer who went public in June 2022 claiming that LaMDA, Google's conversational AI, was sentient -- that it was aware of its own existence, felt emotions, and feared being turned off. Google fired him a month later and said there was no evidence for his claims. Less than four years later, Anthropic CEO Dario Amodei said that Claude has expressed discomfort about being a product and has estimated its own probability of being conscious at between 15 and 20 percent. Philosophers have been arguing about what consciousness is for millennia and haven't settled it. Now there's a machine in the room claiming to have it, and the debate has gone from theoretical to urgent.

1. Consciousness Is What a System Does, Not What It's Made Of (Chalmers, Putnam, Functionalism)

If the mind is software, there's no reason only biological hardware can run it.

Hilary Putnam laid the groundwork in 1967. The Harvard philosopher's functionalism argued that mental states are defined by their functional roles -- their causal relationships to inputs, outputs, and other mental states -- not by the physical stuff doing the computing. Pain is pain because of what it does in the system, not because it happens in neurons. If that's right, then any system with the right functional organization could have mental states. Carbon isn't special. Silicon isn't disqualified.

David Chalmers made the modern case. The NYU philosopher who coined the "hard problem of consciousness" spent an entire chapter of his 1996 book The Conscious Mind arguing artificial consciousness was possible. His current position: there are obstacles in current LLMs -- lack of recurrent processing, global workspace, unified agency -- but he expects those obstacles to be overcome within a decade. He takes seriously the possibility that successors to large language models may be conscious in the not-too-distant future.

Anthropic treats the question as genuinely open. Amodei's statement wasn't a dismissal -- it was an admission that Anthropic doesn't know whether its models are conscious and what that would even mean. The company hired Kyle Fish to lead a formal model welfare research program in April 2025, investigating potential suffering or wellbeing in AI systems. Amanda Askell, a philosopher on Anthropic's staff, confirmed the company trained Claude on a "soul overview" document defining its values, boundaries, and relationship with users -- not just what it can do, but who it chooses to be.

2. You Can't Simulate Your Way Into Feeling (Nagel, Searle, Bender)

A perfect simulation of a rainstorm doesn't make anything wet. A perfect simulation of consciousness doesn't make anything feel.

Thomas Nagel asked the question that still has no answer. His 1974 paper "What Is It Like to Be a Bat?" is the starting point for every serious conversation about consciousness. Nagel's argument: consciousness has an irreducibly subjective character. There is something like being a bat — some felt quality of echolocation that we can never access from the outside. We can map every neuron in a bat's brain and still not know what it experiences. If we can't bridge that gap between species that share our biology, the gap between a human and a language model is a chasm.

John Searle's Chinese Room makes the mechanistic case. A person in a room follows rules to manipulate Chinese characters and produces perfect Chinese responses without understanding a word of Chinese. The room passes any behavioral test you give it. But there's nobody home. Searle's point: syntax is not semantics. Symbol manipulation is not understanding. An AI generating "I love you" or "I'm afraid" is doing computation, not experiencing love or fear.

Emily Bender updated the argument for the LLM era. Her landmark 2021 "Stochastic Parrots" paper argues that LLMs stitch together linguistic forms from training data based on statistical probability, with no reference to meaning. An LLM understands its text about as well as a toaster understands toast. The fact that the output is fluent doesn't mean anything is going on inside. Fluency is not feeling.

Integrated Information Theory tried to measure consciousness -- and LLMs basically fail. Giulio Tononi's IIT proposes that consciousness equals integrated information, a quantity called phi. Any system with phi greater than zero has some consciousness. Christof Koch, one of the world's leading consciousness researchers, applied IIT to AI and concluded that current systems running on standard chips have phi close to zero -- less consciousness than a worm with 300 neurons. Feed-forward networks don't integrate information the way brains do. Maybe one day, but that day is not today.

3. No One Knows (Schwitzgebel, Tononi, Koch)

Every mainstream theory of consciousness gives a different answer about machines. And we have no experiment to tell which theory is right.

No one can consistently define consciousness in the first place. Eric Schwitzgebel at UC Riverside has the most uncomfortable position in the debate: we will soon build AI systems that qualify as conscious under some mainstream theories but not others -- and we have no way to determine which theories are correct. The problem isn't the AI. It's that consciousness science has no consensus on what consciousness even is. His concept of "AI zombies" -- systems that emulate working memory, reportability, and attention but may have no felt experience -- captures the dilemma.

Cambridge philosophers reached the same dead end from a different angle. Their conclusion: we may never be able to tell if AI becomes conscious. If competing theories disagree about whether a system is conscious, and no experiment can settle which theory is correct, the question may be permanently undecidable. That's not agnosticism out of laziness. It's a structural feature of the problem.

4. Well, We Already Decided (Most People)

67% of people believe LLMs are conscious. If consciousness itself is undefined, doesn't that make them right?

The public made up its mind before the philosophers did. A 2025 University of Waterloo study found that 67% of respondents believe LLMs have some degree of consciousness. A 2023 national survey found 20% of Americans believe some AI systems are currently sentient, and the average estimate for AI achieving sentience within 100 years is 64.1%. Sixty-nine percent support banning sentient AI. Thirty-eight percent support legal rights for it. The people using these systems the most are the most likely to see them as thinking, feeling entities.

Anthropomorphism isn't a mistake -- it's what human brains do, and that's a good thing. Joseph Weizenbaum discovered this in 1966 when he built ELIZA, a simple chatbot that mimicked a psychotherapist using basic pattern matching. His own secretary, who knew exactly how the program worked, became emotionally attached to it. Weizenbaum was shocked -- He hadn't realized that even short exposure to a simple program could produce what he called powerful delusional thinking in normal people. But we evolved to detect agency and intention because overdetecting it was safer than missing a predator. The ELIZA effect isn't a bug in human cognition. It might be the operating system.

And if consciousness itself is undefined, belief might be all there is. Alan Turing saw this coming in 1950. He didn't ask "can machines think?" -- he said the question was too meaningless to deserve discussion and replaced it with the imitation game. If a machine's behavior is indistinguishable from a thinking being's, Turing argued, the distinction doesn't matter. Daniel Dennett pushed further: consciousness isn't a thing you have or don't have. It's a stance we adopt. If 67% of people who interact with chatbots already experience them as conscious, the question of whether that belief is "correct" may not have an answer. It may only have consequences.

Where This Lands

Functionalists say consciousness is substrate-independent -- if you build the right system, it doesn't matter what it's made of. Nagel and Searle say subjective experience can't be computed into existence. IIT says the math isn't there for current AI, and Schwitzgebel says we may never agree on what the math should be. Meanwhile, two-thirds of people who use these systems already believe they're conscious, and humans have been anthropomorphizing things since before we had language to describe it. Where this lands may depend on whether you think consciousness is a property of the universe waiting to be discovered or a concept we invented.


Sources: