

Artificial Intelligence

Why Does ChatGPT Feel So Human?

5 reasons why chatting with AI can feel like a real interpersonal interaction.

Key points

  • AI chatbots are trained to mimic humans, drawing on vast numbers of text snippets created by real people.
  • Human trainers have helped ChatGPT learn to avoid weird responses even if the system doesn’t understand why.
  • We are hardwired to recognize social dynamics and categorize conversations as real interpersonal interactions.

In the short period since its launch, ChatGPT has become a huge sensation. Many other chatbots powered by artificial intelligence (AI) have popped up as well. Together they have raked in top scores on a range of tests—from medical school to bar exams—and have helped write research papers (Kelly, 2023). For many people, AI chatbots have also become fun tools and even deep companions. Chatting with an AI system can be problematic, but it can also feel deeply rewarding, much like talking to a close friend. Here are five reasons why interacting with chat-based AI systems often feels like talking to a real human being.

ChatGPT has passed a range of exams and has fooled reviewers of scientific papers into believing that its output was written by human researchers.
Source: ilgmyzin | Unsplash

1. Because it is human

One of the main reasons language-based AI feels so human is because it is. Well, not the AI itself but the content.

AI technology is trained using a huge amount of sample data that it then tries to recreate. In a way, current AI systems reassemble existing elements rather than creating something completely new from scratch. For chatbots, the sample data comprises a large number of snippets from conversations that were fed into the system. All these snippets have been created by real people. In that sense, the content is indirectly generated by humans—and merely reassembled by AI—which is one of the main reasons it feels so real.
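To make the idea of reassembly concrete, here is a toy sketch (with invented sample text) of a bigram model: it records which word follows which in human-written snippets, then stitches new sentences together from those recorded pairs. Real chatbots are vastly more sophisticated, but the principle of recombining human-created fragments is similar.

```python
import random
from collections import defaultdict

# Invented human-written sample text (stand-in for conversation snippets).
sample_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

# Record which words follow which in the human-created snippets.
followers = defaultdict(list)
words = sample_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length=6, seed=0):
    """Reassemble a new word sequence from recorded human word pairs."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Every word the toy model emits, and every word-to-word transition, comes straight from the human-written sample; nothing is invented from scratch.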

2. Because it is designed to mimic humans

AI has learned to imitate and mimic humans without truly understanding language, just like some parrots.
Source: Simon Shim | Unsplash

Operant conditioning is a method to modify behaviour through rewards and punishments1.

Many AI systems use methods of machine learning that are very similar to operant conditioning to gradually improve themselves. They start with trial and error, but each outcome is evaluated and given either a “reward” or a “punishment” (for a computer, this is simply a high or a low number). The system then adjusts its behaviour to try to maximise the reward, an approach called “reinforcement learning” (Agostinelli, Hocquet, Singh, & Baldi, 2018). Although the process differs from operant conditioning in its details (for example, AI systems use mathematical optimisation to improve quickly), the core principles are much the same.
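As a rough illustration of reward-driven learning, the following sketch (with made-up actions and reward numbers) shows an agent that tries actions, receives a numeric reward, and gradually settles on whatever earns the higher number:

```python
import random

random.seed(42)

# Hypothetical actions and their hidden rewards (unknown to the agent).
rewards = {"polite reply": 1.0, "rude reply": -1.0}
estimates = {action: 0.0 for action in rewards}  # the agent's learned values
counts = {action: 0 for action in rewards}

for step in range(200):
    # Mostly exploit the best-known action, sometimes explore at random.
    if random.random() < 0.1:
        action = random.choice(list(rewards))
    else:
        action = max(estimates, key=estimates.get)
    reward = rewards[action] + random.gauss(0, 0.1)  # noisy numeric feedback
    counts[action] += 1
    # Nudge the running estimate towards the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent has learned to prefer the rewarded action
```

No understanding is involved anywhere in this loop: the agent simply drifts towards whatever number is higher, which is the "reward and punishment" dynamic described above in its barest form.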

Language-based AI has been trained to mimic humans and is rewarded the more human it sounds. This process does not require the system to feel anything, to have any awareness, or even to understand what it is saying. AI does not feel and sound like a human because it has a similar inner experience but simply because it has been trained to be very good at mimicking humans. It is really just a very clever parrot or mockingbird.

3. Because it was trained by humans

There are many ways to train an AI system. ChatGPT used a series of learning tools to master basic language syntax. A key to its success was that its developers included human trainers to give feedback and assign reward points to fine-tune the basic language module2. This human touch turns out to be crucial because the trainers can identify weird responses even when they come in grammatically correct sentences—something machines struggle to recognise. Chat AIs may not understand the fine nuances of acceptable and normal human conversation, and they may not know why something is weird or off-limits. They nonetheless learn to avoid much of it through reinforcement processes. In that sense, language-based AIs are a bit like dogs that are toilet trained: the behaviour isn’t innate, natural, or desired by dogs but simply learned from the humans they live with. Chat AIs don’t know what it feels like to be offended or what being offensive even means. But they have learned to avoid what their human trainers have flagged as offensive, weird, or unnatural.

4. Because language is (or was) uniquely human

As far as we know, language is a uniquely human construct. There have been several high-profile cases of researchers claiming to have taught a form of language to different animals, such as Koko, the lowland gorilla, and Washoe, the chimpanzee who learned sign language. Most experts now think that these are not examples of real language (Ben-Yami, 2017; Cheney & Seyfarth, 1998). But even if they were, only a handful of researchers have had the opportunity to interact with them. For the rest of us, our entire experience with language has exclusively been with other human beings. Given this repeated association, it is no wonder that when we encounter language generated by AI, it feels very human-like.

Do you see the face? This is an example of face pareidolia—the tendency to interpret specific arrangements in random objects as faces.
Source: Harry Grout | Unsplash

5. Because we are hardwired to recognise it as human

Human beings are inherently social creatures. This observation was famously made by the ancient Greek philosopher Aristotle (384–322 BC) and is still valid today. We are hardwired for social interactions, and our brain goes into overdrive to recognise instances of social threat and cooperation, which can activate specialised reward circuits that help us bond (Young, 2008). Being good at sensing even fine nuances in social interactions means that our unconscious minds are constantly on the lookout for cues about interpersonal dynamics. This underlying hypervigilance leaves us inherently biased towards seeing social interactions everywhere, even where none exist. A key to interpersonal interaction is the ability to read facial expressions, which is why we are hardwired to recognise faces and often believe we see them even in random objects (this is called face pareidolia). Just as with faces, recognising social dynamics is largely innate and effortless. Interactions with AI have all the hallmarks needed to be recognised as real and interpersonal.

No sentience required

Given the way we are wired and how AI functions, chatbots can feel human-like even if they are nothing like us. While we may not be able to shake the feeling of interacting with a real person, AI doesn’t require consciousness (for more on this, see "What is Consciousness?") or any sentience at all to be able to evoke that sense in us.

References

1 This technique was pioneered by the American psychologist Edward Thorndike (1874–1949) and further developed by fellow American psychologist B. F. Skinner (1904–1990)

2 ChatGPT was developed using a technique called “Reinforcement Learning from Human Feedback” (Chatterjee & Dethlefs, 2023)

Agostinelli, F., Hocquet, G., Singh, S., & Baldi, P. (2018). From reinforcement learning to deep reinforcement learning: An overview. In L. Rozonoer, B. Mirkin, & I. Muchnik (Eds.) Braverman Readings in Machine Learning. Key Ideas from Inception to Current State (pp. 298-328). Springer International Publishing. https://doi.org/10.1007/978-3-319-99492-5_13

Ben-Yami, H. (2017). Can Animals Acquire Language? Scientific American. URL: https://blogs.scientificamerican.com/guest-blog/can-animals-acquire-language/

Chatterjee, J., & Dethlefs, N. (2023). This new conversational AI model can be your friend, philosopher, and guide... and even your worst enemy. Patterns, 4(1), 100676. https://doi.org/10.1016/j.patter.2022.100676

Cheney, D. L., & Seyfarth, R. M. (1998). Why animals don't have language. Tanner Lectures on Human Values, 19, 173-210.

Kelly, S. M. (2023, Jan 26). ChatGPT passes exams from law and business schools. CNN Business. URL: https://edition.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html

Pang, D. K. F. (2023, May 6). What is consciousness? Psychology Today. URL: https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/2023…

Young, S. N. (2008). The neurobiology of human social behaviour: An important but neglected topic. Journal of Psychiatry and Neuroscience, 33(5), 391-392.
