


The Unbearable Conundrum of AI Consciousness

Will machines one day become self-aware?

Source: Peshkova/Shutterstock

Machine learning is a branch of artificial intelligence (AI) in which computers learn from data without being explicitly programmed. Technology is advancing toward artificial general intelligence (AGI), in which a machine can perform any intellectual task a human can. This trend is fueled by the growing ubiquity of cloud computing and the computing power it provides, the increasing sophistication of algorithms, the falling costs of data storage and acquisition, and the expanding availability of data. Futurist Ray Kurzweil, Director of Engineering at Google, predicts that computers will reach human-level intelligence by 2029 [1]. Since the 1990s, Kurzweil has made 147 predictions about the future with 86 percent accuracy [2]. If a computer can learn over time and eventually become as cognitively powerful as a human, can it learn to be aware of itself?

In humans, self-awareness develops during early childhood. Reflective self-awareness, in which children are able to match their movements with their reflections in a mirror, generally emerges around 15 to 18 months of age and becomes a stable trait of typically developing children by 24 to 26 months [3]. The same does not necessarily hold true for animals.

In 1970, psychologist Gordon Gallup Jr. devised the mirror test to determine whether animals can recognize themselves in mirrors, and the results since then have been far from conclusive [4]. A few species, such as Asian elephants, orangutans, and chimpanzees, have passed, but only inconsistently [5]. Bottlenose dolphins, killer whales, and two captive manta rays have passed, but those results are subject to interpretation [6]. Most animals do not recognize their own reflections in a mirror [7]. In 2008, two European magpies passed a mirror test administered by Helmut Prior of Goethe University, though other corvids have not [8]. There are myriad possible explanations for these mixed results, and more studies are needed to reach a conclusive answer.

It goes without saying that inanimate objects such as a rock or a toaster are not self-aware; they lack both a brain and sensory input. But could a robot outfitted with sensory inputs analogous to human vision, hearing, touch, taste, and smell recognize itself in a mirror?

Technically, the answer is yes. The robot’s computer brain could be programmed so that if the reflection in the mirror meets a defined set of sensory criteria, a variable “self” is established. The unique identifier could be as simple as a barcode or as sophisticated as a combination of distinctive physical characteristics, analogous to a human thumbprint, face, and body. But recognition of a programmed “self” doesn’t make a robot self-aware in the same sense as a human, and hard-coding variables in a computer program is not machine learning.
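To make that distinction concrete, here is a minimal sketch of what such hard-coded recognition could look like. The barcode value, function name, and matching rule are purely hypothetical illustrations, not an implementation anyone has built.

    # A minimal sketch of hard-coded "self" recognition (not machine learning).
    # The barcode value and matching rule are hypothetical illustrations.

    SELF_BARCODE = "ROBOT-0042"  # identifier assumed to be printed on the robot's chassis

    def reflection_is_self(decoded_barcode: str) -> bool:
        """Return True if the barcode decoded from the mirror image matches our own."""
        return decoded_barcode == SELF_BARCODE

    # The robot's vision system decodes a barcode from the mirror image,
    # and a simple equality check sets the "self" flag.
    observed = "ROBOT-0042"
    self_recognized = reflection_is_self(observed)
    print("Reflection identified as self:", self_recognized)

Everything the program “knows” about itself here was put there by a programmer; nothing is learned from experience, which is exactly why passing such a check says little about awareness.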

Now consider a robot with machine-learning capabilities that is also outfitted with sensory inputs, has been given enough training data to identify a robot in general, but has not been explicitly programmed to recognize itself. Set it in front of a mirror. If the robot commands itself to raise its arm and sees the expected movement in the reflection, would it eventually develop a sense that the arm it sees is its own and not that of a different robot mimicking its moves? Through learning and experience, over many iterations of trial and error, could such a machine eventually develop a sense of self and become aware?
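One way to picture that trial-and-error process is as temporal contingency detection: checking, instant by instant, whether the motion seen in the mirror tracks the robot’s own motor commands. The toy simulation below is only an illustration under that assumption; the lag model, trial count, and function names are hypothetical and do not describe any actual robot.

    # A toy simulation of contingency-based self-recognition.
    # Assumption: the robot's own reflection moves in lockstep with its commands (lag 0),
    # while another robot mimicking it copies the moves one step late (lag 1).
    import random

    def contingency_score(lag: int, trials: int = 300) -> float:
        """Fraction of trials in which the observed motion matches the current command."""
        history = [0]                            # motion commands issued so far
        matches = 0
        for _ in range(trials):
            command = random.choice([-1, 0, 1])  # e.g., lower, hold, or raise the arm
            history.append(command)
            seen = history[-1 - lag]             # what the camera reports this step
            matches += (seen == command)
        return matches / trials

    print("Own reflection (lag 0):  ", contingency_score(lag=0))  # ~1.0, perfect contingency
    print("Mimicking robot (lag 1): ", contingency_score(lag=1))  # ~0.33, chance-level match

A learning system could, in principle, discover such a statistical regularity on its own and label the perfectly contingent body as “mine.” Whether that labeling amounts to anything like awareness is precisely the conundrum in this article’s title.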

“Cogito ergo sum,” Latin for “I think, therefore I am,” is the famous dictum of the French mathematician, scientist, and metaphysician René Descartes (1596-1650) [9]. Descartes, generally regarded as the father of modern philosophy, studied the fundamental nature of reality and existence itself [9]. He defined thought in terms of consciousness [10]. If artificial general intelligence is achieved, and the cognitive capabilities of a machine become indistinguishable from those of a human, a technological singularity may reshape the very definition of what it means to be conscious and to exist, forever altering the course of humanity.

Copyright © 2018 Cami Rosso All rights reserved.

References

1. Galeon, Dom, and Christianna Reedy. “A Google Exec Just Claimed The Singularity Will Happen by 2029.” Futurism, March 16, 2017.

2. Ibid.

3. Brownell, Celia A., Stephanie Zerwas, and Geetha B. Ramani. “‘So Big’: The Development of Body Self-Awareness in Toddlers.” PMC, May 14, 2012.

4. Yong, Ed. “What Mirrors Tell Us About Animal Minds.” The Atlantic, February 13, 2017.

5. Ibid.

6. Ibid.

7. Ibid.

8. Ibid.

9. Hatfield, Gary. “René Descartes.” The Stanford Encyclopedia of Philosophy, Summer 2016 Edition.

10. Jorgensen, Larry M. “Seventeenth-Century Theories of Consciousness.” The Stanford Encyclopedia of Philosophy, September 27, 2014.
