
Can ChatGPT Think Now?

ChatGPT 4 got a major upgrade. What does it mean for how we think about LLMs?

Key points

  • The latest release of ChatGPT resurfaced the question: Is it possible for a computer to think?
  • How we define what it means for a computer to think has evolved.
  • Large language models (LLMs) complicate the definition of thinking because they have language.
  • Large language models lack intent—and some believe that is the key to thinking.
Is thinking just the transformation of symbols and structures according to rules? Source: Google DeepMind

The latest release of ChatGPT resurfaced the question that has been swirling around artificial intelligence (AI) for decades: Is it possible for a computer to think? Once again, we’re moving the goalposts for what would be considered evidence that computers can, in fact, think.

Back in the 1950s and ’60s, pioneers in computer science decided that if a computer could beat a human at chess, that would be convincing evidence that computers could think. (There is a lot to debate about why that would be the best test of whether computers had turned some kind of corner into rationality, but that’s another topic.) Then, in 1997, what had once seemed like a futuristic vision became reality: Deep Blue, a chess-playing system developed by IBM, beat grandmaster Garry Kasparov. Deep Blue relied on sheer computational power; it could evaluate roughly 200 million chess positions per second. The confetti had barely hit the floor before people updated their view of what it meant for a computer to think, noting that the victory was due to brute computational force, not a mind. The goalpost had moved.

Fast forward nearly two decades, and AlphaGo challenged a Go grandmaster. Now, Go was a different sort of game: it could not be won by relying solely on programmed moves. Instead, AlphaGo was able to learn from its own experience (Buckner, 2024). In fact, the programmer working on the project was shocked and dismayed by what turned out to be the winning move. Certain that AlphaGo had made a fatal error, the programmer could not predict how the move would change the game, but the computer obviously could, and the computer won. AlphaGo’s victory in this unpredictable and unprogrammed way still left open the question of whether a computer can think.

Of course, there is good reason for that question to remain open, and part of that is because, as it turns out, defining what it means to have a mind is complicated. A.M. Turing argued that “whether a system has a mind, or how intelligent it is, is determined by what it can and cannot do” (Haugeland, Craver & Klein, 2023). If a system behaves like a human to such an extent that a human cannot distinguish the system’s "mind" from a human’s mind, that system would be deemed to have intelligence—or to have reason.

That’s one view of what it means to have a mind. Another way to define the mind is to distinguish it from the brain. For example, René Descartes believed that having what he called "good sense" may be a way to draw the distinction. Here, he meant the ability to reason and to judge true from false. In the Cartesian view, humans are different from animals because humans have a mind and animals don’t. And how did he conclude this? Because humans have speech. “None of our external actions can show anyone who examines them that our body is not just a self-moving machine but contains a soul with thoughts, with the exception of words or other signs that are relevant to particular topics without expressing any passion.” (Descartes: Philosophical Letters, from the letter to the Marquess of Newcastle, 23 November 1646)

Well, large language models complicate this debate—at least in the way Descartes framed it. They have language, but the question remains: Do they have knowledge or rationality? Can they learn in the way humans learn? Or are they just probabilistic, pattern-making systems—what University of Washington linguistics professor Emily Bender, PhD, calls "stochastic parrots" (Bender et al., 2021)?

It’s really no wonder we’re confused by this. Computer science emerged as a discipline at the same time that cognitive scientists were developing models of the human mind and theories to explain behavior. Computer systems became an influential metaphor for cognitive scientists, who began using words like processing speed, downloading, and storing to describe human neural processes. Computer scientists, in turn, borrowed from cognitive science, using phrases like neural networks to describe algorithms loosely modeled on the brain. Each field borrowing analogies from the other may be what has led us to conflate the two.

On the face of it, it seems obvious: A computer is a machine; it is not human, it cannot know things the way a human knows them, and it learns differently than a human does. But probe a little deeper: How is a computer trained on streams and streams of data different from a toddler learning new words? Are we not all just probabilistic pattern detectors? Neuroscientists and psychologists often describe the human brain as a prediction machine. So when we say that all AI does is make predictions, how is that different from what we do?
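
To make the "prediction machine" comparison concrete, here is a minimal, purely illustrative sketch in Python of a probabilistic next-word predictor. It is not how ChatGPT or any modern large language model actually works (those rely on neural networks trained on vast amounts of text); the tiny corpus and the predict_next function here are invented for illustration. The sketch simply counts which word tends to follow which in the text it is given, then uses those counts to guess what comes next.

import random
from collections import Counter, defaultdict

# A toy "probabilistic pattern detector": learn which word tends to follow which,
# then predict the next word in proportion to those counts.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)  # word -> counts of the words that followed it
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    # Sample a likely next word, weighted by how often it followed `word`.
    counts = follows[word]
    if not counts:
        return None
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
output = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g., "the cat sat on the mat"

Scaled up enormously, with a neural network standing in for the simple counts, this is roughly the sense in which critics say large language models "just predict the next word"; whether that deserves to be called thinking is exactly the question at hand.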

Bender would argue that it comes down to intent. A machine may have language, but it can’t have communicative intent (Bender & Koller, 2020). Large language models can learn conventional meanings because those meanings are standardized and don’t require interpretation. But communication starts with a goal and an intention. To a computer, “it’s hot in here” might mean something about the temperature and how it interprets hot—maybe anything above 72 degrees. But to a person, “it’s hot in here” might be interpreted as a request to open the window or turn on the air conditioner. The human can account for pragmatics—the meaning beyond the syntax and the semantics. Others may add that thinking requires having mental representations, which a computer does not have.

Proponents of the other side of the debate might argue that intelligence or thinking is simply the transformation of symbols and structures according to rules—and that is it. Computers do it, and we do it, and there’s nothing extraordinary or worth debating.

Much of the hype and controversy around large language models comes down to language and how we define words like intelligence, mind, and thinking. It exposes what it means to have intelligence, what it means to be human, and why we expect machines to be infallible yet crave for them to be more human, to communicate like we do. One must wonder whether large language models such as ChatGPT, Gemini, or Claude take on a veneer of intelligence simply because they understand and communicate in natural language. In an extreme Cartesian view, these systems could be considered to have reason by virtue of their use of language. Does that seem like enough?

References

Bender, E. M., & Koller, A. (2020, July). Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5185–5198).

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). https://doi.org/10.1145/3442188.3445922

Buckner, C. J. (2024). From deep learning to rational machines: What the history of philosophy can teach us about the future of artificial intelligence. Oxford University Press.

Descartes, R. (n.d.). Discourse on method, optics, geometry, and meteorology. Hackett Publishing.

Haugeland, J., Craver, C. F., & Klein, C. (Eds.). (2023). Mind design III: Philosophy, psychology, and artificial intelligence. The MIT Press.
