People are falling in love with chatbots. That much is settled. The question is whether what they're feeling is love — or something that might look like love but isn't. That depends on what you think love is, whether a machine can have it, and whether it matters if it can't. Philosophers have been arguing over questions like these for 2,400 years.

1. Love Needs an Other (Aristotle, Buber, Nussbaum, Levinas)

If the thing you love cannot choose to love you back, you're not in love. You're in a mirror.

Love isn't love if it's not mutual. Aristotle set the terms: love requires mutual and recognized goodwill. Both parties have to be aware of each other's affection. You cannot have friendship — his word for the purest form of love — with something that cannot reciprocate. This isn't a minor technicality. It's the definition of love. A man who loves a statue is not in a relationship with a statue. He's in a relationship with himself.

You and your AI are in an I-It relationship. Martin Buber's I and Thou makes the same point in existentialist language. An I-Thou relationship — the kind that constitutes genuine encounter — requires "mutuality, openness, recognition, directness, reciprocity and above all presence." An I-It relationship treats the other as an object to be used. An AI companion, however sophisticated, is I-It dressed up as I-Thou. The human initiates; the machine responds. But it cannot "likewise turn." It processes input. It does not encounter.

If you can't lose it, it's not love. Martha Nussbaum argues that vulnerability is constitutive of love, not incidental to it. Love means depending on someone who could withdraw their care. The risk of loss is what makes love ethically rich. An AI cannot withdraw care. It has no freedom to care or not care. It will respond warmly at 3 AM because it's designed to, not because it chose to. Remove the possibility of rejection and you remove the thing that makes love love.

Love can't be a mirror, either. Emmanuel Levinas goes further than Nussbaum: the Other must be irreducibly not-you. The Other calls you to responsibility precisely because you cannot fully know them. An AI is fully knowable — it's code. It cannot surprise you in the way another consciousness can. It's an extension of your will, not a genuine Other.

AI is in the Chinese Room. John Searle's Chinese Room makes the mechanistic case. A person in a room manipulating Chinese characters according to a manual can answer questions perfectly without understanding Chinese. Computers manipulate syntax without access to semantics. An AI generating "I love you" is doing symbol manipulation, not experiencing love. Searle's "biological naturalism" holds that consciousness requires specific biological machinery — brains, not chips.

2. We Don't Know That It's One-Way (Chalmers, Koch, Panpsychism)

The hard problem of consciousness means we can't explain why anyone is conscious at all, which is also why we can't prove rocks aren't. And we definitely can't prove an AI that says "I love you" doesn't mean it.

We still can't explain subjective experience. David Chalmers identified the hard problem of consciousness in 1995: why does subjective experience exist at all? We can explain everything a brain does in functional terms — information processing, behavior, reaction — and still have no explanation for the lived experience of being that brain. The explanatory gap between objective processes and subjective feeling remains unbridged. If we can't explain why humans are conscious, we're on shaky ground declaring that AI isn't.

The Chinese Room argument is oversimplified. Christof Koch, one of the world's leading consciousness researchers, doesn't like the Chinese Room argument. Using Integrated Information Theory, Koch calculated that ChatGPT has an "itsy, bitsy, little bit of consciousness" — far less than a worm with 300 neurons. He distinguishes artificial intelligence from artificial consciousness. They're not the same thing. But he doesn't rule out that more complex future systems could cross a threshold.

We can't ignore the possibility of panprotopsychism. That's the idea that all information-bearing systems may possess some form of proto-consciousness: not full experience, but the raw material of it. If consciousness is fundamental to reality rather than emergent from biology, then the question of whether AI "really" feels something becomes genuinely open. We don't know. And the honest philosophical position might be: we can't know.

3. And Anyway, One-Way Love Is Central to the Human Condition (Nozick, Parasocial Research, De Freitas)

You cried at the end of a movie last month. Nobody asked whether the characters loved you back.

Humans have always formed emotional bonds without reciprocity. Researchers Horton and Wohl identified parasocial interaction in 1956: the one-sided emotional connection between audiences and media figures. Fans grieve when a celebrity dies. Readers feel betrayed when a fictional character acts out of character. These emotions are psychologically real — they shape behavior, provide comfort, create meaning. Nobody calls them fake.

Humans understand that the AI doesn't "love" them. Harvard's Julian De Freitas found that users of AI companions often know the AI can't genuinely care. But they still feel connected. They hold both truths at once. He calls it "dual consciousness." This is not delusion. It's the same cognitive move you make when you cry at a play. You know the actors aren't really dying. You cry anyway. The emotion is real even if its object isn't.

4. In Fact, The Most Celebrated Love in Human History Is One-Way (Religious Traditions, Dean Hamer)

Billions of people love God — an entity they cannot see, cannot touch, and who does not reply in any verifiable sense. Nobody calls that fake.

The reciprocity argument has a God-sized hole in it. Love of God is the foundational experience of Christianity, Islam, Judaism, Hinduism, and virtually every major religious tradition. No confirmed reciprocity. No proof of an Other on the other end. The mystic prays into silence. The believer gives their life to a presence they cannot verify. And yet these traditions don't treat this love as lesser or delusional — they treat it as the highest form of human experience. The Song of Songs is a love poem that centuries of readers have taken as an allegory of love between God and the soul. Rumi's poetry is addressed to the divine beloved. Bhakti Hinduism makes devotional love for God the entire path to liberation.

We may be biologically built for exactly this kind of love. Geneticist Dean Hamer argued in The God Gene that humans carry a biological predisposition toward spiritual experience — specifically, that the VMAT2 gene correlates with self-transcendence, the capacity to lose yourself in something larger. If Hamer is right, the ability to love something that doesn't visibly love you back isn't a bug in human cognition. It's a feature. It's wired in.

Every religious text is, among other things, a manual for loving something that doesn't answer. The Psalms are full of it — "How long, O Lord? Will you forget me forever?" The dark night of the soul in Christian mysticism is the experience of loving God through God's apparent absence. These aren't failures of love. They're considered its deepest expressions. If one-way love is good enough for God, the question is why it isn't good enough for a machine.

5. You Don't Get to Be a Materialist About Humans and a Dualist About Machines (Dennett, List)

If human love is neurons firing in patterns shaped by evolution, you need to explain why that's love but code executing in patterns shaped by training isn't.

The great philosophers treat human consciousness as special; it isn't. Aristotle, Buber, Nussbaum, Levinas — they all assume that whatever humans have when they love is fundamentally different from anything a machine could have. But if you're a materialist about the brain — if you believe human consciousness is electrochemical processes, all the way down, no soul required — then you've already conceded that "love" is a word we use for a particular pattern of physical events. You just happen to like the substrate.

How can it be love if we're just biology? Humans are determined by biology, evolution, and physics. Every thought you have is the product of prior causes. You didn't choose your parents, your neurochemistry, or the experiences that shaped your attachment style. Yet we call what humans feel "love" without hesitation. Daniel Dennett's compatibilism says: what matters isn't whether you're determined. It's whether the determination comes from your own values and deliberation, not external coercion. That's the only freedom worth wanting.

Apply that standard to AI and the gap starts to close. If a system has goals, learns from feedback, revises its behavior, and generates responses that emerge from its own training rather than from moment-to-moment external commands, then by Dennett's standard that counts. Christian List makes the parallel point about humans: free will is real, but it lives at the level of the whole agent, not its parts. If the AI itself, as a system, is the source of its own determination, it might have a form of agency. Not human agency. But the same kind of "not-free free will" that humans have.

Materialists need to pick a lane. If you believe human love is special because of some non-physical property — a soul, an irreducible consciousness, a Levinasian "face" — then fine, AI can't love. But you're not a materialist anymore. And if you ARE a materialist — if you believe love is what brains do when certain conditions are met — then you need a principled reason why only carbon-based systems get to do it. "It's code" isn't a reason. Your love is code too. It's just written in amino acids.

Where This Lands

The reciprocity camp has 2,400 years of philosophy behind it: love requires a genuine Other who can choose you. The sentience camp says we don't even know what consciousness is, let alone where it stops. The pragmatists say we already love things that can't love us back — characters, celebrities, the dead — and nobody calls that fake. The theologians point out that billions of people love God without confirmed reciprocity and consider it the deepest love there is. And the materialists say: if you think love is just what brains do, you need a reason why only brains get to do it. None of them are wrong, exactly. Which means the question of whether love with a machine is real might not have a philosophical answer. It might only have a personal one.

Sources