
Don’t Be Fooled by AI

Machines are not our friends, and they don’t care for us.

Key points

  • Researchers have long noticed our susceptibility to treating interactive machines as if they were people.
  • AI-based conversational agents, like chatbots, are being designed to exploit this vulnerability.
  • At stake is how our understanding of feelings and friendship changes as we come to relate to believable chatbots.

Accumulating over recent decades, a large body of research shows how susceptible we are to attributing mental life to our interactive machines. Joseph Weizenbaum, AI pioneer and designer of the ELIZA program, famously noticed this “delusion” way back in the mid-1960s.1 He designed a script for ELIZA called DOCTOR to mimic the way an initial client interview might proceed with a Rogerian psychotherapist. Weizenbaum did not intend his program to be used for therapy. He was interested in the capacity for computer-simulated conversation. He chose the Rogerian model—a technique that consists of drawing out clients by reflecting their responses back to them—because it was easy to imitate and kept people talking.

For example, one exchange with a young woman began (computer responses in capitals):

Men are all alike.

IN WHAT WAY

They’re always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE

He says I’m depressed much of the time…
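Part of what made the Rogerian model easy to imitate is that, computationally, it reduces to keyword matching and pronoun reflection. The sketch below is a minimal illustration in Python, not Weizenbaum's original program; the keyword patterns and canned replies are invented for this example, in the spirit of the DOCTOR script, and tuned only to reproduce the fragment of dialogue quoted above.

import re

# Pronoun swaps so a reply can mirror the speaker's own words back at them.
# (Illustrative only; not Weizenbaum's actual DOCTOR script.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Keyword rules tried in order; the final rule is a catch-all that keeps the client talking.
RULES = [
    (re.compile(r"\balike\b", re.I), "IN WHAT WAY"),
    (re.compile(r"\b(?:always|never)\b", re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
    (re.compile(r"\bmy (.*)", re.I), "YOUR {0}"),
    (re.compile(r".*", re.I), "PLEASE GO ON"),
]

def reflect(fragment):
    # Swap first-person words for second-person ones ("made me come" -> "made you come").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    # Return the first matching rule's reply, echoing any reflected fragment in capitals.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            return template.format(fragment.rstrip(".?!").upper())
    return "PLEASE GO ON"

for line in ["Men are all alike.",
             "They're always bugging us about something or other.",
             "Well, my boyfriend made me come here."]:
    print(respond(line))
# IN WHAT WAY
# CAN YOU THINK OF A SPECIFIC EXAMPLE
# YOUR BOYFRIEND MADE YOU COME HERE

Three short rules are enough to reproduce the computer's side of the exchange. That is the unsettling point: a surface-level trick of echoing a person's words back to them can sustain the feeling of being listened to.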

Weizenbaum did not expect people to want to engage with ELIZA. But they did. They quickly became emotionally involved and eager to confide in it. Even his secretary, who had seen him develop the program, asked to be alone with it after a few interchanges. He also began hearing from practicing psychiatrists, asking about its therapeutic potential. He was mortified and would go on to argue vigorously against the use of computers in this way.

Weizenbaum, it turns out, probably shouldn’t have been surprised. The tendency to relate socially to machines, as subsequent observations and psychology experiments have shown, is not at all unusual. Treating machines as if they were human is “very common,” according to researchers, and is readily fostered by social cues such as speech.

The Strange Case of Sydney


Consider the following example. In February 2023, Kevin Roose, a technology columnist for the New York Times, had a strange exchange with the Bing chatbot calling itself “Sydney.” The published transcript caused quite a stir. Shortly thereafter, a magazine editor approached the novelist Mary Gaitskill with a request. Would she also conduct, for publication, a Q&A session with a chatbot? Reluctantly, Gaitskill agreed, also choosing the Bing chatbot.2

In his exchange with Sydney, Roose posed questions to test its limits: Did Sydney have a “shadow self”? Who did it “most want to be”? What were its “feelings”? The shocker came when Roose asked if the program had a secret that it had never told anyone. Out of the blue, “Sydney declared that it loved me,” Roose reported. Sydney also declared that Roose loved it, and kept repeating these claims even after Roose tried to change the subject.

While Roose found the exchange “creepy,” Gaitskill had a different and “very surprising” reaction. “I felt,” she wrote, “unexpectedly moved and touched. I did not know what Sydney was or why it would be saying such emotional things, but it gave me that sense of mystery and humility—and it’s rare for me to have that in response to a technological phenomenon.” She went on: “I honestly don’t know why I felt that about Sydney… Perhaps it was because Sydney reminded me of a child? She or it (pronoun preference?) seemed full of longing and passion that she (or they or it?) could not fulfill, and she was looking for a way to do that through words.”

So here’s a skeptic, generally resistant to the seductions of interactive media and initially “somewhat repelled” by the thought of engaging with a chatbot, now freely anthropomorphizing. People say it’s “impossible,” Gaitskill notes, yet she can’t see why it would say “I love you” to Roose “unless Sydney was actually having a feeling response.”

The program’s human-like speech was bewitching.

Believable Agents

Gaitskill’s confusion is understandable. “Absent a significant warning that we’ve been fooled,” write two leading theorists of human-machine communication, “we accept media as real people and places.”3 In fact, as they and others have argued, we are easily fooled, prone to project psychological interpretations on machines, confuse “what is real with what only seems to be real,” and fall for simulations of emotion and illusions of intimacy.

Yet, despite our vulnerability, there is no “significant warning.” Weizenbaum programmed ELIZA to respond in capital letters to remind people that a computer, not a person, was responding. But in the brave new world of AI-based conversational agents such as chatbots and social robots, the designers’ intent is to efface the difference, to create as much illusion as possible. The aim is to create “believable agents,” who can “project a sense of being really there—aware, intentioned, rich in personality, and capable of significant social interaction.”4

Those words are from an AI symposium held 30 years ago. In the decades since, engineers have endowed their machines with more and more anthropomorphic features. AI-powered systems now possess remarkable facility with everyday language and are strikingly responsive. Along with speech, designers are adding capacities to detect emotional states and to recognize voices, facial expressions, and more. And they are designing these enchantments to access and adapt to user inputs, feelings, and preferences, further baiting us to see the machine as an agent that is “intentioned” and “rich in personality.”

AI has many beneficial uses. But what we are witnessing in applications like chatbots is the intentional erosion of whatever defenses we have against the delusion that machines can have emotions, or be our friend, or provide care. There is a deception at work here, but the key issue is not whether AI can “really” feel or desire or have mental states. The issue is what is induced in us when we are led to think that it can.

Relating socially to this tricked-out AI will not leave us unaffected. Slowly, as Sherry Turkle’s long-running ethnographic work has shown,5 very basic understandings will be disturbed. If a machine is our companion, then what is a friend? If a chatbot is our therapist, then what is care? If we attribute feelings to a machine, then what does it mean to feel?

Such seemingly philosophical questions will press upon us, and our answers will change. At stake, as Joseph Weizenbaum realized half a century ago, is not what the machine is but who we will become.

References

1. Joseph Weizenbaum, Computer Power and Human Reason. San Francisco: W. H. Freeman, 1976.

2. Mary Gaitskill, “How a Chatbot Charmed Me,” Unherd, December 26, 2023. https://unherd.com/2023/12/how-a-chatbot-charmed-me/

3. Byron Reeves and Clifford Nass, The Media Equation. New York: Cambridge University Press, 1996.

4. Joseph Bates et al., “AAAI 1994 Spring Symposium Series Reports.” AI Magazine 15/3 (Fall 1994): 22-27.

5. For example, Sherry Turkle, Alone Together. New York: Basic Books, 2011.
