
Why Is ChatGPT So Smart and So Stupid?

ChatGPT and large language models mix brilliant insights with idiotic mistakes.

Key points

  • ChatGPT and other large language models can be useful in uncovering and synthesizing information.
  • ChatGPT’s limitations highlight how current AI falls short with respect to explanation, accuracy, and ethical values.

Along with millions of users, I have been experimenting with ChatGPT, OpenAI’s public version of its large language model GPT-3. In answer to hard questions, ChatGPT sometimes delivers insightful responses that would be a credit to an excellent Ph.D. student. Other times, however, it makes idiotic and obnoxious mistakes. Below, I give reasons why ChatGPT is sometimes so smart, contrast them with reasons why it is sometimes so stupid, and draw lessons about the differences between human and artificial intelligence.

Why is ChatGPT so smart?

I asked ChatGPT: “How is Plato's Cave like Zhuangzi's dream of being a butterfly?” To my amazement, it quickly generated a one-page essay that accurately described Plato’s allegory about people trapped in a cave where they see only shadows on a wall, and recounted the story of the Chinese philosopher who wondered whether he was a man dreaming he was a butterfly or a butterfly dreaming he was a man. Even more amazingly, the essay explained how both stories concern perception, reality, and enlightenment. I have been similarly impressed by its answers to other questions, both deep and mundane. How does ChatGPT pull this off?

  1. ChatGPT and other large language models have access to a vast amount of verbal information available on the Internet, including Wikipedia, countless Web sites, and electronic books.
  2. These models use powerful machine learning algorithms to train neural networks that synthesize this information.
  3. These neural networks predict what utterances are most probable given users’ questions and previous interactions (see the toy sketch after this list).
  4. ChatGPT has received further training through reinforcement learning that improves its ability to generate articulate and useful answers.
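
To make the idea of probabilistic next-word prediction concrete, here is a minimal sketch in Python. It is only an illustration, not how GPT models actually work internally: it replaces billions of learned neural-network parameters with simple bigram counts, and the tiny corpus and the function name predict_next are invented for the example.

    # Toy next-word prediction: count which word follows which in a tiny
    # corpus, then predict the most probable continuation. Real language
    # models do the same kind of prediction with neural networks trained
    # on vast amounts of text. The corpus here is invented for illustration.
    from collections import Counter, defaultdict

    corpus = ("the cave shows shadows . the dream shows a butterfly . "
              "the cave hides reality .").split()

    # Count how often each word follows each preceding word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the most probable next word and its estimated probability."""
        counts = follows[word]
        best, n = counts.most_common(1)[0]
        return best, n / sum(counts.values())

    print(predict_next("the"))  # ('cave', 0.666...): 'cave' follows 'the' 2 of 3 times

Scaled up enormously, this is why ChatGPT sounds so fluent: it has absorbed the statistics of human text. It is also why, as the next section shows, fluency does not guarantee truth.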

Then why is ChatGPT so stupid?

My son Adam asked ChatGPT “Who is Paul Thagard?” Its response was fairly accurate about some of my publications, but it got my birthday and birthplace wrong, even though these are available on my Wikipedia page. Laughably, it simply fabricated the claim that I am a musician who plays guitar for a band called Rattlesnake Choir! Many other users have noticed that ChatGPT makes idiotic mistakes, which result from the following flaws:

  1. ChatGPT merely predicts the next thing it could say, without any causal model of how the world actually works. It has sophisticated syntax but no semantic connection to reality, which makes it incapable of explaining why things happen.
  2. Unlike responsible human communicators, ChatGPT has no accuracy goals and can easily be tricked into generating vast amounts of misinformation.
  3. ChatGPT is also lacking in ethical principles about telling the truth, avoiding harm to people, and treating people equally.
  4. ChatGPT does not disclose its sources. Evaluating information requires examining the reliability and motives of its sources, but ChatGPT merely gives oracular pronouncements. Shockingly, it sometimes makes up references.

ChatGPT’s problems give insights into human and artificial intelligence.

ChatGPT and other large language models are useful tools with many applications, but their problems provide valuable insights into the limits of current AI.

  1. Intelligence requires explanation based on a causal understanding of the world and not just prediction. Human intelligence is not just predictive processing because it also excels at pattern recognition, explanation, evaluation, selective memory, and communication.
  2. ChatGPT has the potential to increase the amount of real information available to people, but it is also a powerful tool for generating and spreading misinformation.
  3. Intelligent communication depends on values and ethical principles about accuracy and benefiting people rather than harming them.
  4. The so-called alignment problem, of producing artificial intelligence systems whose values align with those of people, is much harder than generally assumed. Human values are emotional attitudes rather than mere preferences. ChatGPT and other programs have no bodily contributions to emotions, which motivate values.

So should people stop using ChatGPT? No: it can be used with caution, like a search engine that may take you to bogus Web sites full of misinformation. ChatGPT forces users to be extra vigilant because they cannot know where its information originates, since it is a statistical amalgam of what is available in Web sites, electronic books, and similar sources. Instead of buyer beware, accentuate the warning: browser beware.

References

Christian, B. (2020). The alignment problem: Machine learning and human values. New York: W. W. Norton & Company.

Thagard, P. (2021). Bots and beasts: What makes machines, animals, and people smart? Cambridge, MA: MIT Press.
