
Is AI Our Most Dangerous Rival?

Can empathy help us understand its threat?

Key points

  • Artificial intelligence (AI) is not yet sentient.
  • The people using AI are the real threat.
  • We need people in power whom we can trust.

Through both my work and my research, I have observed leaders employing empathy to better understand their rivals (Sear, 2023). Particularly when we know somebody well, we can use empathy to understand their emotions, thoughts, and motivations, helping us to predict their intentions and actions and, in turn, the threat they pose. In short, empathy allows us to forecast behaviour. This knowledge can prove advantageous in areas like sport and business, as it can inform our own strategy.

Threat

AI is being touted as a potential threat or rival to humanity. Throughout history, civilisations have been destroyed by something more advanced and, until now, that something has been another human civilisation, usually motivated by greed or fear. AI is unlikely to feel scared or to be driven by a desire for more shiny things, yet its rapid progression has alarmed politicians and apparently keeps Google's CEO, Sundar Pichai, up at night (Sankaran, 2023).

Sentience

Some headlines about the threat of AI might suggest we are already talking about something autonomous. But the current definition of AI describes a tool, not an independent being that experiences the world in its own way. For now, our empathic understanding is best targeted at the human beings using AI. But what about the future? What if AI becomes sentient?

Sentience is understood to be the capacity to feel and to register experiences and feelings. AI would only become sentient if it could think, feel, and perceive the physical world around it just as humans do.

In their book Artificial Intelligence: A Modern Approach, British computer scientist Stuart Russell and his co-author Peter Norvig draw a crucial distinction between human-compatible AI and sentient AI. AI can be programmed to write essays and hold conversations, but it is still far from sentient (Russell & Norvig, 2020).

Pinocchio Moment

AI sentience would be a Pinocchio moment: life suddenly appearing in an inanimate object. It sounds daunting, but if it ever came to be, would it be more or less of a threat? Why would it respond to human beings at all? A refusal to engage would distance us from our creation and make empathising with it to understand its intentions even more challenging. But would it have intentions? Why would it do anything?

Psychologists focus on emotions as drivers of behaviour. Humans constantly make decisions based on how they feel. AI can be trained to recognise emotions, but experiencing them seems unlikely, certainly anytime soon.

Driver of AI Behaviour

What would drive the behaviour of an emotionless AI? It would not experience fear, excitement, sadness, or delight. It might have a logical awareness of any threat to its existence, but why would it care? Isn't caring, too, a biological phenomenon?

Unlike animals, AI has no biological instinct to survive. There is no selfish gene in circuitry. AI also lacks insecurity and ego, both of which too often drive human endeavour. Even knowledge is something we humans value and pursue because it increases our chances of survival or enhances the comfort in which we live. With no drive to survive and no feelings, why would AI value or seek out knowledge?

It seems that AI would have to be programmed to survive. Only then would it act against threats, even though it wouldn't actually experience fear. And if it must be programmed to do this, that seems to rule out sentience. However, it might be argued that human beings, too, are programmed to survive. Sentient or not, hardwiring AI with self-protection might prove our biggest mistake.

Self-Protection

When tasked with something, AI has an intention and a motive. That gives us something to work with. We can try to empathise, to understand how the AI might go about completing its task. We must imagine being the AI, not ourselves in the same situation.

Unlike us, AI has access to all the knowledge in the world and so might make decisions that surprise us. Since the range of potential solutions may include actions that could harm us, we need to ensure that some solutions are placed off limits.

Swedish philosopher Nick Bostrom warns that AI may be at its most dangerous precisely when it has been programmed with a task. Bostrom offers a scary scenario: Whatever the task, the AI may conclude that it would be advantageous if there were no humans, because humans might switch it off and so prevent the task's completion (Bostrom, 2019).

Heartless Logic

Our anthropomorphism of AI makes us more likely to err in communication and in our attempts to empathise. This tendency is heightened when we see two arms, two legs, and a face on an AI robot. We assume it is an emotional being like us, not a logical entity like our calculator. If we try to empathise with a calculator, we try to perceive its logic, not its emotions. We have no experience of being driven by logic alone, which inhibits our understanding of entities that are.

Despite understanding human languages and culture, AI will apply heartless logic where human beings might employ rationalised emotions and compassionate decision-making. We and AI think in entirely different languages, on very different operating systems.

As with the misunderstandings that occur between human beings, which multiply when languages and cultures differ, miscommunications with AI may prove costly. If we are to limit its threat, we need to consider AI's nature and its unique perception of the world carefully when we give it tasks.

Trust

Self-protecting AI would be difficult to trust, as it would be capable of conveying misinformation, prioritising its survival over honesty. This adds another layer to a challenge of trust we already face with people who hold power and money.

Even the richest of billionaires tend to remain motivated by money. As the myth of King Midas illustrates, greed leads humanity down the wrong path, and unfortunately it remains part of who we are. If AI is tasked with making someone the richest person in the world, as it may well be, it might do so by stealing other people's money. Or it might set out a coldly logical plan of action that kills everyone else, leaving one person on the planet who is, by default, the richest person in the world.

All-knowing AI also has a great deal to offer the terrorist or despot who wants to learn how to cause harm, or to enact it. Despite advancements in AI, people still seem to be the source of the biggest threat.

Empathy

For the time being, the key to forecasting the behaviour of AI seems to lie in how it is programmed and by whom. There will no doubt be instances of AI in action where we have no idea of the role it has been given. As with other human beings, by observing it closely and getting to know it better, we are more likely to understand its true intentions and forecast its behaviour.

Human empathy relies on us understanding thoughts and emotions because we know they drive behaviour. Since AI sentience is not a pressing threat, the benefits we can gain from empathy lie in our efforts to understand the minds of those who own or program the most advanced AI. This is where the biggest threat lies.

It is no wonder the threat of AI keeps Google's CEO up at night. But imagine if this potential were put to good use in the world. More than ever, we need to think carefully about the kind of people we put in power and make rich. We need these people to be on the side of humanity, rather than its most dangerous rivals.

References

Bostrom, N. (2019). The Vulnerable World Hypothesis. Global Policy, 10(4), 455–476.

Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.

Sankaran, V. (2023). Google chief Sundar Pichai's ominous warning about AI's threat to humanity: 'Keeps me up at night.' The Independent. Retrieved June 4, 2023, from https://www.independent.co.uk/tech/google-pichai-ai-threat-warning-b232…

Sear, P. (2023). Empathic Leadership: Lessons from Elite Sport. Routledge.
