
The Intelligence of AI

How smart is AI really?

Key points

  • AI has been argued to be on the verge of general (human) intelligence.
  • Whether AI is really intelligent depends on one's definition of intelligence.
  • AI looks intelligent because it excels at recognizing patterns and filling them in.

Some claim that AI is approaching general human intelligence. Indeed, some might see AI as being as intelligent as humans, just as AI seems to understand language the way humans do. In an earlier post, I argued that definitions of understanding change as a function of developments in AI: For decades, the argument has been made that computers do not understand language because they are unable to perceptually simulate information. But the examples meant to demonstrate that AI cannot understand language need to be revisited now that AI can accomplish the very tasks argued to be reserved for human understanding.

I am not arguing that AI understands language. Instead, I am arguing that our definitions of human understanding may need to be revisited now that AI has "mastered" the criteria for comprehension. We either need to conclude that computers understand language, or we need to revisit our criteria of understanding.

The same applies to intelligence. How intelligent is AI really? Have ChatGPT and DALL-E (to name the most common ones) reached human intelligence, as some would like to argue? Is AI really on the verge of reaching general human intelligence?

Human Intelligence

From a psychology perspective, the question of whether AI reaches general (human) intelligence is an odd one. Or at least it is a question that brings us to a more fundamental question of what human intelligence entails. And that answer is hard to give.

In the early part of the 20th century, Alfred Binet and Theodore Simon introduced an intelligence test that measured the mental age of a child in relation to its actual age to determine the child’s IQ. The test consisted of questions concerning basic vocabulary and the repetition of digits, and the results of that test were plotted against the actual age of a child.

If general intelligence were to be defined according to the Binet-Simon IQ test, computers would have achieved human intelligence long ago. In fact, if one were to take one aspect, calculations, as a measure of intelligence, then the first mechanical calculator of 1642 already mastered intelligence. And the first general-purpose electronic computer, the ENIAC (Electronic Numerical Integrator and Computer), already well outperformed humans at calculation in 1946.

But such a vocabulary-and-basic-mathematics definition of general intelligence is likely too narrow. This is what psychologists realized in the mid-20th century. In 1949, Raymond Cattell proposed that general intelligence ought to be seen as comprising both crystallized intelligence and fluid intelligence. Crystallized intelligence includes vocabulary, general information, and abstract word analogies. Fluid intelligence includes the basic processes of reasoning, such as number and letter series, matrices, and paired associates.

One may argue that AI had already captured crystallized intelligence, but not fluid intelligence. With the recent examples from ChatGPT and DALL-E, however, AI has apparently mastered both crystallized and fluid intelligence and, therefore, general (human) intelligence.

But has it really?

In 1983, Howard Gardner argued that general intelligence should not be seen as crystallized and fluid intelligence. It should not even be defined as a single intellectual potential that can be measured. Gardner argued that there are different intelligences, at least eight of them, ranging from visual-spatial, linguistic-verbal, and logical-mathematical, to musical, bodily-kinesthetic, and inter- and intrapersonal intelligence.

If we define general intelligence according to Gardner's multiple-intelligences view, as a conglomeration of different intelligences (not only basic mathematics and language), then AI has not at all reached general human intelligence.


Apples and Oranges

Comparing human intelligence and artificial intelligence is a bit odd for another reason, too. Imagine that one were to ask whether bird flight and airplane flight are the same. Clearly, they are not. Birds fly by flapping their wings; airplanes do not.

I leave aside a whole range of other obvious differences between birds and airplanes. Yet, if we were to define flying as moving fast through the air, both birds and airplanes have mastered flying. At one level, they do this very similarly, for instance, by both being subject to the four forces of flight: weight, lift, thrust, and drag. At another level, they do it very differently: one with flapping wings and the other without.

Just as with birds and airplanes, whether AI has reached general (human) intelligence depends on the level of analysis. At one level of analysis (what Marr called the implementational level), AI and human intelligence are entirely different. At another level (the algorithmic level), they may show some interesting similarities. And yet, at the computational level, they both solve the same problem: intelligence, however that may be defined.

Pattern Appeal

So why is it, then, that AI implementations such as ChatGPT look so human-intelligent? Leaving aside terminology that may be a source of confusion ("intelligence" and "neural networks"), deep learning, which involves sophisticated artificial neural networks, has over the years repeatedly produced some very (artificially) intelligent results. Yet those results felt more machine-like than human-like.

The reason ChatGPT speaks to our imagination is that it is excellent at simulating what humans do so well: recognizing patterns and filling those patterns in. It is one thing if a machine can measure the difference in meaning between two words, sentences, or paragraphs, or can make some abstract calculations. It seems an entirely different thing if a machine can follow patterns, make predictions, and suggest intentions.

Cognition

The answer to whether AI is intelligent thus depends on the definition of intelligence (basically, the task at hand: the flying) and on the level at which we look at how that task is solved (the shared forces of flight versus the nature of the wings).

In that respect, the discussion of whether AI is (human) intelligent is reminiscent of the discussion of whether animals are intelligent. For years, scientists agreed that animals could never achieve (human) intelligence. More recently (apparently we got a bit smarter), a growing body of evidence suggests that animals are very intelligent, in certain respects perhaps more intelligent than humans, depending on how one defines intelligence.

Primatologist Frans de Waal asked whether we are smart enough to know how smart animals are. Analogously, I would like to ask whether we are intelligent enough to know how intelligent AI is. I propose an answer to that question: It depends on one's definitions and levels of analysis.

References

De Waal, F. (2016). Are we smart enough to know how smart animals are? W. W. Norton & Company.

Gardner, H. E. (2011). Frames of mind: The theory of multiple intelligences. Basic Books.

Louwerse, M. (2021). Keeping those words in mind: How language creates meaning. Rowman & Littlefield.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. MIT Press.
