Verified by Psychology Today

Artificial Intelligence

Selfhood, Artificial Intelligence, and Robotic People

What is selfhood, and is AI getting there?

Key points

  • To understand whether AI is achieving selfhood, it's useful to have a biophysical understanding of what selves really are.
  • Selves are fragile and therefore have to make efforts not to degenerate.
  • AI computers are durable, not fragile, and therefore don't struggle for their own existence.
  • Automation makes our environments more reliable and secure, which makes some people relax into dangerously robotic ways of moving through life.

Is AI finally achieving selfhood? You’ll find lots of opinions about this, many of them based on loose, if-it-walks-like-a-duck parallels.

Here’s my opinion based on 25 years of research attempting to explain what selves are and how they emerged from nothing but physical chemistry.

The big difference between selves and non-selves is that we exist by persistence. Non-selves—everything from rocks to computers—exist because they’re durable. Selves aren’t durable. We’re fragile, yet we’ve somehow managed to survive for an uninterrupted 3.8 billion-year run.

Selves make functional interpretive effort, which I’ll call “responsiveness.” Now, one could say that a wall “responds” to you hitting it, but living responsiveness is different. A wall isn’t making functional interpretive effort to keep itself in existence.

There’s really something to the idea that a human has a body, heart, and mind, though this three-ply abstraction needs some tidying up, which I’ll attempt here. Body, heart, and mind are three organs. We use them as metaphors for three responsiveness processes:

  1. Body: Basic responsiveness, universal to all organisms.
  2. Heart: Feeling or emotional responsiveness, universal to all animals.
  3. Mind: Conceptual responsiveness, unique to humans as a function of our limitless capacity to use symbols, e.g. language.

Let’s unpack these a bit.

Basic responsiveness (body): You generate 240 billion new cells every day without thinking or feeling it. That’s an example of your basic responsiveness chugging away to keep you alive, regenerating your fragile selfhood in a universe where everything falls apart. In popular psychology, we often talk as though people are just their feelings and thoughts when, in fact, our basic responsiveness is most fundamental. Without it, we’re dead. Self-regeneration is even more fundamental to biology than self-reproduction, since you can’t reproduce if you’re degenerated.

Felt responsiveness (heart): All animals feel—visceral sensations for what works and doesn’t work to keep them self-regenerating. Food and sex feel good because they're good for your self-regeneration. Wounds warn of threats to self-regeneration. Still, feelings are tricky, one step removed from basic responsiveness. Things that feel good aren’t always good for you. Animals learn by feel. We human animals do too—far more than we like to admit. Ironically, it feels good to pretend that we’re strictly logical, judging concepts as though we can “leave our feelings at the door.” We can’t. Feelings are very convincing.

Conceptual responsiveness (mind): Humans are the one fully symbolic species. We use symbols—culturally maintained languages—that enable us to conceive of anything real or imaginary. Language-based concepts make us the delusional, visionary, anxious, and denialist species that we are.

Two further comments about our three-ply nature:

Checks and balances: Body, heart, and mind are not a simple hierarchy. Our three kinds of responsiveness alternate dominance like a healthy three-branch checks-and-balances government. For example, a stomachache can stir anxious concepts, and anxious concepts can cause a stomachache. A failing body alters both heart and mind. Conversely, happy hearts and minds can improve bodily well-being.

“Heart” is a contronym: A contronym is a term that means opposite things. A selfish lust is a “heart’s desire,” but we call selfish people “heartless.” Our most convincing feelings are self-serving. Since our nerve endings don’t extend into each other’s bodies, it takes effort to set aside our selfish heart’s desires and offer heartfelt compassion instead. But that’s another story.

With all that in mind, what can we say about AI’s potential for selfhood?

Basic responsiveness (body): AI is software instantiated on durable hardware designed to not degenerate. Computers don’t have to do work that works to keep them working in their workplace. Thus, AI doesn’t have the most basic and universal feature of selfhood. It doesn’t struggle for its own existence. It doesn’t have to try to stay alive by preventing its own degeneration.

Feeling responsiveness (heart): AI feels and yearns for nothing. Energy is supplied to it. We can design AI to repeat behaviors that are “rewarded,” but AI doesn’t feel rewarded.

Conceptual responsiveness (mind): Words, symbols, and concepts mean nothing to AI. To say that AI interprets or understands the symbols it processes is like saying that a washing machine knows the meaning of “spin cycle,” or that books think. Concepts are the most virtual or abstract feature of human responsiveness. AI is a further abstraction of such abstraction. One could program AI to generate any symbolic output at all. Its output is meaningless to the AI itself.

AI is designed to do a convincing imitation of human behavior for human interpreters. Convinced, we might take AI’s word for it that the AI is sad. But AI is no sadder than the pixels of a crying Pixar character on a movie screen. Computers don’t think and minds aren’t computers.

Should we worry about AI? Sure. We should be concerned about any automated processes, including humanity's incorrigible habits. Should we worry about AI coming alive and taking over? We’re already at risk of automated systems taking over, mindlessly governing our lives.

Should we be impressed by AI? Yes. It’s impressive how convincing it has become and how useful it can be for us.

Automation lifts burdens off humans, freeing us either to focus on other priorities or to do less, our minds atrophying because they're no longer necessary. For example, convenient home appliances made it possible for 1960s housewives to enter the workforce or to drink and watch TV.

In his book Our Own Worst Enemy, GOP political scientist Tom Nichols exposes a paradoxical trend: When we feel safer and freer, we don't necessarily become calmer and more thoughtful. We often become restless, cocky jerks indulging in fake moral outrage and public vanity projects.

My biggest worry isn’t AI but people assuming that with automation, they're so safe and free that they become what I'll call “feel-bots,” robotically drunk on positive self-regard by means of a simple trick: mindlessly assigning all virtue to themselves and all vice to their rivals.

“Feel-bots” are not robots. They’re flesh and blood, letting their feelings take over at the expense of body and mind. In a world simplified by automation, they dumb down to one line of sado-narcissistic self-programming: If it sounds good, it’s about them. If it sounds bad, it’s about their rivals.

Here's a useful article on AI and how it exposes the ease with which we can mistake language fluency for thinking.

And here's a video I made about how easy it is to become robotically self-affirming:

References

Nichols, Tom (2021). Our Own Worst Enemy: The Assault from Within on Modern Democracy. New York, NY: Oxford University Press.

Sherman, Jeremy (2017). Neither Ghost Nor Machine: The Emergence and Nature of Selves. New York, NY: Columbia University Press.
