Verified by Psychology Today

Think You’d Never Fall in Love With a Chatbot? Think Again

ChatGPT, the movie "M3GAN," and Bing’s declarations of love.

Key points

  • Even though the idea of falling in love with a chatbot may sound absurd, we are more vulnerable to it than we think.
  • You can consciously think something isn't human, yet unconsciously think about it as if it is human.
  • If you unconsciously think something is human, you may fall in love with it, even if your conscious mind thinks doing so is ridiculous.

As chatbots like Replika and Bing enter the market, following the release of ChatGPT late last year, increased attention is being given to questions about our capacity to form deep relationships with, and even love, our hard-coded friends. Meanwhile, the movie M3GAN, which revolves around a young girl's overly close relationship with an empathetic but murdery android doll, has become a surprise hit. And Bing’s pushy declarations of love for New York Times reporter Kevin Roose have left some readers laughing and others quite unsettled. Finally, a quick internet search for “I fell in love with a chatbot” yields a fairly alarming number of people who at least claim to have done this.

All of this raises big questions about our relationship with chatbots and androids. Many people wonder if it is even really possible to fall deeply in love with a piece of software. And most people assume that, even if there are people out there who have fallen in love with chatbots, it won’t happen to them. But if that’s your view, don’t be too sure. You may be more likely to fall in love with a chatbot than you think, especially as they rapidly continue to improve.

One crucial, and often overlooked, research finding is that we love things despite the fact that our brains are hardwired to reserve love only for people. That’s because, from an evolutionary standpoint, it makes good sense to become emotionally attached to your children, but it doesn’t make sense to have the same kind of attachment to objects. Yet, as I explain in more depth in The Things We Love, we do love all sorts of objects and activities. If our brains evolved mechanisms that reserve love for people, how is our love of things possible?

The answer is that our brain has an unconscious sorting mechanism that distinguishes between people and things, and sometimes this sorting mechanism gets fooled and turns an object into an honorary person, which results in the phenomenon called anthropomorphism. When this happens, it becomes possible to love that thing. For example, our unconscious sorting mechanism puts pets into the person category, which is why so many pet owners speak to their pets in ways that the pet can’t understand: your unconscious mind has classified your pet as an honorary person and is leading you to behave accordingly.

The existence of this unconscious sorting mechanism is extremely important for understanding our love for objects, because before you can love an object, this unconscious sorting mechanism needs to misclassify it as human. For example, because the front of a car looks a little like a human face (the headlights look like eyes, and the grille looks like a mouth), many people unconsciously classify their cars as human. This helps explain why so many people love their car, to the point where a survey found that 12% of respondents would buy their car a Valentine's Day gift. But if just having headlights that look like eyes is enough to get our brains to see cars as a little bit human, you can imagine what happens when your brain encounters a chatbot that can have a full conversation with you. Chatbots readily fool the unconscious mind into treating them as if they were human, which makes it easy for people to fall in love with them.

One of the most consistent research findings is that things like chatbots create a conflict in the person who interacts with them: their conscious mind knows it's not a person, but their unconscious mind is treating it as if it were human. If you’ve ever had trouble getting rid of an old possession you no longer use, you’ve felt this conflict in action. Your conscious mind knows it isn’t useful to you anymore, but your unconscious mind has created an emotional attachment to the object, which makes it hard to part with.

Many people think they would never fall in love with a chatbot because it’s not human. But falling in love is largely under the control of your unconscious mind. And even when your conscious mind knows something isn’t human, your unconscious mind may have concluded that it is human, and, therefore, eligible for love. Saying, “I know that's not a person. Therefore, I'm not going to respond to it emotionally,” is like saying, “Consciously, I know how alcohol affects my behavior. Therefore I can drink until I'm drunk, and it won't affect my behavior.” It doesn't work that way. It's going to affect your behavior, even if, consciously, you know what's going on.

Currently, it is uncommon for people to fall in love with chatbots, because the chatbots still aren’t very good. But in the future, chatbots will not only have much better factual intelligence, but they'll also have a lot of emotional intelligence. So, when you say to a chatbot, “Oh, I had a hard day at work,” it will know exactly the right thing to say back to make you feel better. And that's going to be emotionally very gratifying.

In our relationships with other people, when they tell us they've had a hard day at work, sometimes we have the energy to be really caring and responsive. But sometimes we had a hard day at work, too, and we're exhausted, and we're in a hurry, so we don't respond in the best possible way. These chatbots are always going to be focused 100% on your needs. They're not going to have any needs of their own, and they're going to be very good at meeting yours. And it's going to be very easy to develop emotional attachments to these things.

What really concerns me is that we’ve all heard about incidents in which a group of people who have never been exposed to a certain virus gets exposed for the first time, and that virus runs rampant because the people have never built up an immunity to it. We just experienced this with COVID. Your brain is in a similar situation. It evolved over hundreds of millions of years, during which there were never any objects you had to interact with that talked like a person but weren’t a person. Right now, we have no defenses against that.

What we see in human behavior over and over again is that we know at some level that doing challenging, difficult things is rewarding and makes our lives better. But very often, we choose the easy things just because they're easy. You can see this with our choice of junk food: it's not the best food, and it certainly isn't the healthiest food. But it's pretty good tasting, it's cheap, and it’s convenient. So, we eat a lot of it. Another example: When I choose a television show or a movie in the evening, often I will know that there is some movie that I'll probably find really rewarding, be glad that I watched, and think about for days afterward. But I don't watch that; I watch some junky thing, because I'm tired, and it's easy, and I'd rather have easy than good. I think there is a real potential that we will have the same conflict in our social relationships, where our relationships with people are better, but these relationships with chatbots are easier.

© Aaron Ahuvia, 2023
