Human-Like Consciousness and Human-Like Intelligence
Human-like consciousness and human-like intelligence might evolve differently
Posted August 28, 2017
What is the relation between human-like consciousness and intelligence and other possible forms of consciousness and intelligence, such as those of other species and artificial intelligence (AI)? Murray Shanahan proposes four charts that capture key aspects of these potential relations. The charts plot actual and possible agents along two axes: the H axis, for "human-likeness," and the C axis, for "capacity for consciousness"; together these form the H-C plane. There is one chart for biology and another for AI. A third synthesizes these and adds possible alien forms of consciousness and intelligence, from extraterrestrial life to superintelligent (general) AI. A final chart represents the overall landscape of these combinations as regions in the H-C plane. Together, these charts portray how human-like qualities and consciousness are present in beings other than humans, which has important implications for the development of AI.
There are many interesting aspects of these charts, but we will focus on their proper scope and interpretation and on whether they include all the possibilities. An assumption central to these charts concerns the role that behavior plays in understanding consciousness in other species and general AI: a form of anthropocentrism in which we compare other possible conscious beings based on what we know about ourselves. This is a natural assumption, as consciousness is a subjective phenomenon. Although the charts may be interpreted as an invitation to move beyond this kind of anthropocentrism, since one of the axes is human-likeness there will always be a role for humans in the comparison of any form of conscious awareness. Does the same hold for intelligence? In other words, should the same anthropocentric assumption be used in comparing forms of intelligence?
In a previous post, we addressed aspects of this anthropocentric issue by pointing out that evolution is an important constraint on the degree to which biological species are consciously aware, regardless of human-likeness. This does not mean that human-likeness is not an important point of comparison. For instance, among the different forms of attention that have evolved, some may be essentially conscious in humans. But the idea is to make the question of the degree and quality of conscious awareness one that is more amenable to empirical inquiry than to anthropocentric introspection and behavioral judgment. We have also addressed whether superintelligent AI will be capable of human-like conscious awareness. We proposed that empathy and social intelligence may not be achieved by means of super-powerful information emulation, a serious limitation for AI: even if such emulation produces identical behaviors, these might be mere imitation.
Here we want to introduce a more constructive perspective on the H-C plane: a kind of extra dimension based on the dissociation between consciousness and attention. Suppose human-like consciousness is found only in living species and cannot be reproduced by AI, particularly with respect to emotions and their motivational consequences, which are grounded in biological constraints. This would not entail that AI systems cannot surpass humans in their capacity for accessing information and thinking intelligently. These may be two different evolutionary paths, one for consciousness and another for intelligence. Could these paths give rise to distinct kinds of conscious awareness? Maybe, but the homeostatic biological constraints on our own awareness may prevent us from fully empathizing with these alien kinds of consciousness. An intriguing possibility opened by the dissociation between consciousness and attention is that these intelligent systems, if they truly develop human-like intelligence and surpass it, will have attention routines similar to ours. So they would not be total strangers to us: we could communicate and interact with them, and even understand them. The problem would be that we could not relate to them in terms of our specific, biologically based kind of conscious awareness.
An overlap in attention routines with other species and AI could be understood in terms of epistemic forms of agency (forms of agency that lead to beliefs and knowledge; see Fairweather and Montemayor, 2017). But the immediate connection between our conscious awareness and our biology, and the deep engagement of our emotions with our biochemistry, present the possibility that the overlap in terms of consciousness, unlike the overlap in terms of intelligence, will never be perfect. In any case, the relation of consciousness to our biology and the independence of intelligence in AI are important topics to address in the future. If there is a dissociation between intelligence and consciousness, it could have very broad implications. Animals have a basic connection to emotions through biology, which is not a merely informational connection. AI systems may play important roles in the politics and ethics of the future, but it is unlikely that they will understand moral emotions the way biologically based organisms do. On the behavioral side, our responses to others are not mere informational events in which we simply form a belief. They are deep aspects of who we are, for instance in cases in which we respond to the pain of another person. Here humans and their biology take center stage, and the anthropocentric approach is justified.
But this may not be the case with human-like intelligence, which may actually be part of the development of superintelligent, general AI. Intelligence and subjective awareness are both commonly taken as markers of consciousness. If the dissociation is correct, however, a world without consciousness would be a world without the grip of emotion and empathy, but not a world without intelligence (although issues about human inquiry and motivation would remain difficult to assess). Two important roles would be at stake: one connecting us with the evolution of species, the other connecting us with intelligent beings. Much work lies ahead for researchers exploring the differences between awareness and intelligence in all their possible varieties (see, for instance, Kevin Kelly's proposal that there will be many different kinds of intelligence, which challenges contemporary anthropocentric assumptions and optimism about general AI). Whether something like spirituality or wisdom will ever enter the picture in the relation between consciousness and intelligence is yet another question to ask.
For now, we wish to highlight two aspects of the H-C chart, one concerning human-like consciousness and the other concerning human-like intelligence. These aspects may have entirely different evolutions: one biology-bound, which may hopefully lead to increased forms of awareness, and one robotic-computational, which will revolutionize forms of intelligence and, presumably, lead to vastly increased access to knowledge. Whether an increase in intelligence will correlate with an increase in awareness is dubious; at the very least, it is highly speculative to predict that an increase in artificial intelligence will bring with it an increase in consciousness. If confirmed, this hypothesis means that our minds have a dual aspect, one related to awareness and another related to intelligence (or one related to awareness and the other to attention routines; see Haladjian and Montemayor, 2015, 2016). These two aspects may evolve differently, be instantiated differently (awareness requiring biological or homeostatic constraints), and have independent paths in the future.
Carlos Montemayor & Harry Haladjian
References
Fairweather, A., & Montemayor, C. (2017). Knowledge, Dexterity, and Attention: A Theory of Epistemic Agency. Cambridge, UK: Cambridge University Press.
Haladjian, H. H., & Montemayor, C. (2016). Artificial consciousness and the consciousness-attention dissociation. Consciousness and Cognition, 45, 210-225.
Haladjian, H. H., & Montemayor, C. (2015). On the evolution of conscious attention. Psychonomic Bulletin & Review, 22(3), 595-613.
Kelly, K. (2017). The Myth of a Superhuman AI. Wired. https://www.wired.com/2017/04/the-myth-of-a-superhuman-ai/
Shanahan, M. (2016). Conscious exotica. Aeon. https://aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be…