Intelligence
Artificiality and Familiarity in Consciousness
Consciousness plays a role in making the world feel familiar to us.
Posted October 12, 2021 Reviewed by Jessica Schrader
Key points
- Artificial intelligence is "artificial" in at least a couple of ways.
- The emotional Turing test differs from standards for intelligence because of familiarity.
- Alan Turing’s work features centrally in the film "Ex Machina."
One of the things that makes artificial intelligence (AI) “artificial” is that there are good reasons to believe that AI agents will never experience consciousness, even if they become extremely intelligent (Haladjian and Montemayor, 2016). This is not a new idea. This kind of artificiality has been addressed in relation to robots and monsters (like zombies). In this post, we trace some examples of this theme in literature and film. In future posts, we will develop the idea of unfamiliar intelligence as unconscious intelligence.
The possibility of transcending, modifying, increasing, and ultimately demoting human intelligence as we know it—namely conscious intelligence—is addressed at length in literature. A few recent examples are helpful to illustrate the variety of issues at stake in intelligence attribution, but of course, the literature on this theme is rich, from Mary Shelley’s Frankenstein to Isaac Asimov’s I, Robot and much in between—a literature that deserves volumes of analysis.
The message of the examples assessed here is that intelligence does not fall under a single category of analysis. This multiplicity complicates everything: the great genius “creator” quickly turns villain, and vice versa. This is largely because, along the path towards overcoming our limited carbon-based lives, crucial aspects of who we are can get lost. The major difficulty is that it is very hard to stop the process of self-annihilation, or even to understand where exactly things went wrong. Part of the problem is that human intelligence cannot possibly be understood in one dimension, for example, as optimal problem-solving. Fundamentally, as we move towards transcendental forms of intelligence, our cognitive life becomes unfamiliar.
In the film Ex Machina, we find the classic story of the rogue scientist, set on a trajectory from international celebrity to existential threat. This trajectory of the dangerous genius (also sketched masterfully in Frankenstein) is here presented in the context of artificial intelligence, with a new twist on the topic of admiring or detesting one’s own creations. Playing God, in this case, goes badly because of sentimental (and viscerally charged) rather than strictly intellectual issues, and unlike previous stories where one falls in love with a statue or a character, the situation is less asymmetrical. The Golem or the artifact stands her ground as an emotional creature. There is no delusion or illusion about projecting feelings but rather, genuine engagement, at least on the part of humans, with what seem to be authentically emotional machines. But things are quite unfamiliar and uncanny.
Alan Turing’s work features centrally in the film. In theory Ava, the film’s super-intelligent robot, is capable of passing the Turing test viva voce, in all circumstances, and this justifies her being treated as a legitimate source of emotions and complex human intelligence. This confronts us with unclear boundaries concerning the ethical treatment of these essentially new “species” of cognitive agents.
In practice, Nathan (the God-billionaire and male creator of these robots) and Caleb (his male and geeky employee) treat Ava either as an unemotional or at least non-autonomous sex toy or as an entertaining source of fascination. Nathan quite literally judges these robots as objects, and since they are quite realistically attractive, he plainly thinks of them as super-sophisticated female sex toys (rather than super-intelligent agents). This is peculiar behavior on the part of the world-leading designer of AI, who seems to have lost the ethical part of himself that makes empathy possible.
Caleb doesn’t share this opinion and “innocently” falls for Ava, who “passes” the emotional Turing test by actively making Caleb feel strong emotions of sexual and personal attachment towards her. Nathan’s and Caleb’s divergent attitudes towards Ava reveal something important about how to conceive of the Turing test, namely, “passing” the test from an intelligence point of view is not at all the same as passing it from a moral and emotional point of view. For both, however, Ava is fascinating because of her artificiality and high intelligence that seem to exhibit the self-sustaining motivation of living organisms.
The unfortunate result of making an ultra-sexy femme fatale-bot the locus of a wonderful type of conscious (or semi-conscious) intelligence is that it’s not clear that Ava succeeds at passing all the tests she is humiliatingly put through because of her intelligence alone. As Angela Watercutter writes: “Ava does prove to be the smartest creature on the screen, but the message we’re left with at the end of Ex Machina is still that the best way for a miraculously intelligent creature to get what she wants is to flirt manipulatively. (And why wouldn’t she? All of her information about human interaction comes from her creepy creator and the Internet.) Why doesn’t Chappie have to put up with this bullsh*t?” (Wired, 2015) Why, indeed? A white male impresario designs the first super-intelligent robot, depicted as an ultra-sophisticated femme-bot, and the best strategy this super-intelligent robot comes up with is to be sexually manipulative? Perhaps she understands human preferences too well and she is just trying to please the customer (see Russell, 2019 on preference-based value alignment and AI subservience), but this can hardly pass muster as “super-intelligence.” Here the uncanny meets the creepy, and manipulative intelligence stands in contrast to genuinely felt emotion (Montemayor, Halpern, and Fairweather, 2021).
Ava is intelligent, and at the end of the film, the suggestion is that she merges with all sorts of intelligence (including seemingly emotional intelligence), transcending any particular intelligence. But within Ava, two kinds of artificiality merge in morally and politically unsavory ways—on the one hand, she is artificially intelligent; on the other hand, she is artificially emotional, as well as biologically/sexually artificial. Her intelligence is admirable, but her artificiality as an emotionally simulating and “sexy bot” is a bit unsettling (for Caleb, a bit too irresistible).
Artificial emotion creates risks that are independent of artificial intelligence, which can be coarsely defined as problem-solving. Artificial emotion can only be a simulation, and the simulation of emotion is manipulative because it feels unfamiliar and because we experience it as alien to our nature (see Bezzubova, 2020, for how virtual reality induces a certain kind of depersonalization that extends to social media and artificial intelligence). The situation is entirely different with respect to intelligence—the simulation of intelligence is still intelligence (at least insofar as it solves problems). In the case of Ex Machina, this problem is unfortunately coupled with Hollywood biases that portray women’s success as always based on their seductive prowess.
Ex Machina sexualizes super-intelligence. By contrast, the film Arrival alienates it, quite literally. Humans are surprised by the visit of an ultra-intelligent civilization. In their encounters with humans, the members of this civilization look like very big, slender, and intimidating squids. But the film intimates that they are not made of the same biological ingredients that we are and, on the contrary, that these creatures and their vessel are made of some material humans cannot classify. The heroine of the film, Professor Louise Banks, saves humanity by translating the extremely complex geometrical patterns that constitute the language of the aliens. Sexual seduction is fruitless here, partly because the aliens have no clear gender and partly because they are aliens, diverging from the familiarity of human experience. What is clear is that these aliens are much more intelligent, tolerant, and caring than humans. This conclusion holds even though the viewer is left wondering whether they are conscious.
The questions we’ll further explore concern how non-human intelligence can be judged as truly intelligent (as in the case of robots and even animals) and how this contrasts with other aspects of human experience that can be simulated, such as emotions and empathy. Such topics are all crucial for understanding the possibility (or impossibility) of artificial consciousness.
References
Bezzubova, E. (2020). Virtual self and digital depersonalization: Between existential Dasein and Digital Design. Mind and Matter, 18(1): 91-110.
Haladjian, H. H. and Montemayor, C. (2016). Artificial consciousness and the consciousness-attention dissociation. Consciousness and Cognition, 45: 210-225.
Montemayor, C., Halpern, J. and Fairweather, A. (2021). In principle obstacles for empathic AI: Why we can’t replace human empathy in healthcare. AI & Society. Advance online publication. doi: 10.1007/s00146-021-01230-z
Watercutter, A. (2015). Ex Machina has a serious fembot problem. Wired (Culture), https://www.wired.com/2015/04/ex-machina-turing-bechdel-test/