
The Age of Artificial Consciousness

Are we about to see a new form of advanced artificial intelligence in machines?

Everyone talks about artificial intelligence these days. Are we on the verge of the singularity, the moment at which artificial intelligence surpasses human intelligence?[1] Can we afford to design unethical robots, programmed with clever software like the code behind Volkswagen's recent “dieselgate” controversy? Will robots replace doctors and other care practitioners, and can we even build robots that properly care for living organisms like humans? If so, can machines actually learn to empathize with us and perhaps even have emotions? And are we, without realizing it, becoming more robotic ourselves by depending on so many interfaces with machines?[2]

Riba, the medical robot that can help carry patients.
Source: RIKEN-TRI Collaboration Center for Human-Interactive Robot Research

These and related questions are the subject of much debate, both in the mainstream media and in academia. It fascinates us to think of our species as one step in a long progression that will eventually lead to higher forms of intelligence. It also fascinates us to think of ourselves as vulnerable to our own creations as much as we can be “advanced” by them, a thought that can lead to a god-like kind of obsession. It is no surprise, then, that artificial intelligence (AI) has become central in discussions about emotion, ethics, consciousness, and self-awareness. AI has reached beyond its original realm of intelligence and now figures prominently in debates about topics with no obvious connection to artificial simulations of intelligence, such as emotions, empathy, ethics, and even aesthetics.

This expansion of the scope of AI is not without merit. The intimate relationship between emotions and human intelligence seems to justify it (see Picard, 1997). The fact that AI is about information, and that the neural architecture of the brain also processes information, lends plausibility to AI as a theory not only of intelligence but also of emotions and possibly consciousness. Michael Graziano, for instance, has recently claimed that consciousness in AI may be just an engineering problem. This implies that conscious awareness and the emotional aspects of experience are within reach of machines once we overcome the current technical limitations.

Such views, however, are problematic: Artificial intelligence does not entail emotional intelligence, and the prospects for artificial consciousness are bleak. Our main reason for disagreeing concerns the consciousness and attention dissociation (CAD) we described in previous posts. According to CAD, the type of consciousness that manifests in our subjective experiences is intrinsically related to emotional arousal, experiences of moral approval or rejection, and experiences of the sublime and the beautiful; however, forms of intelligence and rationality associated with the attentional processing of features, objects, and events are not intrinsically related to any of the above.

"Bradshaw rock paintings" by TimJN1 - Bradshaw Art
Example of a cave drawing from approximately 25,000 years ago, found in the Kimberley region of Western Australia.
Source: "Bradshaw rock paintings" by TimJN1 - Bradshaw Art

One can understand the importance of these feelings (the beautiful, the good, and the emotionally powerful) by examining how Homo sapiens emerged as a species with distinctive and strongly empathic forms of social intelligence. Cave paintings from over 40,000 years ago reveal a species concerned with art and social values. Indeed, a distinction between good and bad marks every society that left any trace in the historical record. Mythologies, moral systems, and religion characterized and enriched the repertoire of emotions humans have toward each other. And forms of empathy, such as non-instrumental imitation, are so pervasive in humans that the ultra-social aspects of our interactions are robust from childhood, independent of cost-benefit analysis or other calculations. Without denying the intimate relationship between emotion and intelligence, we believe that purely algorithmic forms of simulated intelligence will never produce the emotional empathy that characterizes human beings.

Empathy, we argue, is fully dependent on phenomenal consciousness, the type of consciousness that is directly linked to our subjective experiences. Emotional empathy is concrete and requires conscious attention to the quality and intensity of such experiences. Since emotions are related to forms of attention, one can defend the view that emotions are rational. But that is not very informative, especially when it comes to the computability of emotions: Rationality is too general a criterion, and many models satisfy it, including models in which empathy plays no role whatsoever. It may even be the case that all computations that implement artificial intelligence in robots and machines will lack anything resembling empathy. In contrast, phenomenal consciousness provides a stable reduction of information for the sole purpose of producing intense immediate engagement, including empathy.

Even in our daily interactions with computers, many of which serve social purposes, there is still no reason to think that computers can take over experiencing emotions for us. Think about the feelings you may have when you read text messages, emails, or posts on Facebook. Soon there may be ways for your computer to tell what emotions you are having while you interact with it (see Affectiva). But even if we reach that point of computer emotion recognition, those emotions are your emotions, not the computer's. This point is important: Even if we develop a robot fully capable of responding to our emotions by “paying attention” to the right cues, we would still be the only ones feeling those emotions and, if anything, only projecting them onto the machines.

Referring back to the CAD framework, these ethical and emotional issues may have a cognitive foundation in the distinct roles that consciousness and attention play. We may be able to program ethical behavior into an AI system (for example, “do not cause bodily harm,” “do not take something without permission,” “do not deceive”) by giving the robot a series of attention-like routines, as the sketch below illustrates. From those programmed routines, however, it does not follow that emotional empathy will emerge in such systems. Research on emotions from the neuropsychological perspective reinforces this point.
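To make the contrast concrete, here is a minimal, purely illustrative sketch in Python of what such attention-like routines amount to. Every name in it (Action, EthicalRuleSet, the permits method) is invented for this example and refers to no real system; the point is simply that each "rule" is a feature check on a candidate action, and nothing in the program corresponds to feeling the wrongness of harm or deceit.

# A hypothetical rule-based "ethics" filter: attention-like routines that
# inspect features of a candidate action and veto it when a rule applies.
# Nothing here models what harm or deception feels like to anyone.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    causes_bodily_harm: bool = False
    takes_without_permission: bool = False
    deceives: bool = False

class EthicalRuleSet:
    def permits(self, action: Action) -> bool:
        if action.causes_bodily_harm:
            return False  # rule: do not cause bodily harm
        if action.takes_without_permission:
            return False  # rule: do not take something without permission
        if action.deceives:
            return False  # rule: do not deceive
        return True

rules = EthicalRuleSet()
print(rules.permits(Action("hand the patient a glass of water")))              # True
print(rules.permits(Action("restrain the patient", causes_bodily_harm=True)))  # False

However sophisticated the feature detection feeding such checks becomes, the system's "ethics" remains a cascade of condition tests; on our view, no amount of added rules turns those tests into felt empathy.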

Source: By ckroberts61 via Wikimedia Commons

Regardless, AI promises to keep changing the way we live, perhaps more profoundly than we yet understand. The social, political, and moral consequences of AI may be much deeper than those of any previous industrial or scientific revolution. But there is, we believe, unfounded optimism about what it can do to improve our moral and emotional lives. There is also unfounded pessimism about its negative consequences, given that a key open question is how to achieve high-level cognition, such as consciousness and attention, in artificial agents. In the end, it is unlikely that machines can develop rich forms of either consciousness or attention (even if attention can be understood more “rationally”).

By focusing on the dissociation between the mechanisms for attentional processes and consciousness (the CAD framework), we have a way of understanding why simulated emotion cannot be truly felt emotion — even when simulated intelligence can be seen as intelligence. The expression “I am not a robot” was never meant to capture the idea that “I am more intelligent than a robot” but rather that “I am a consciously aware, emotionally empathic, human being.” Emotions and empathy are far too human to be truly experienced by machines...

– C. Montemayor and H. H. Haladjian

Notes:

1. For a philosophical analysis of the singularity, see the paper by David Chalmers (see References).

2. Some argue that we may be impairing our natural navigation abilities by continuing to rely on technology like GPS.

References:

Chalmers, D. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 17(9-10), 7-65.

Ahn, H.-I., & Picard, R. W. (2014). Measuring Affective-Cognitive Experience and Predicting Market Success. IEEE Transactions on Affective Computing, 5(2), 173-186. doi:10.1109/TAFFC.2014.2330614

Picard, R. W. (1997). Affective Computing. Cambridge, MA: MIT Press.
