
Will Robots Ever Have Emotions?

To be emotional like people, robots need bodies, appraisals, and culture.

Humans have emotions such as happiness, sadness, fear, and anger, and other animals may have them too. Robots are getting smarter, as shown by the driverless cars now navigating city streets. What would it take to make a robot emotional, and would we ever want robots to have that capacity?

According to an obsolete view, rationality and emotion are fundamentally opposed, because rationality is a cold, calculating practice that uses deductive logic, probabilities, and utilities. But there is abundant evidence from psychology, neuroscience, and behavioral economics that cognition and emotion are intertwined in the human mind and brain. Although emotions sometimes make people irrational, for example when a person loves an abusive spouse, there are many other cases where good decisions depend on our emotional reactions to situations. Emotions help people to decide what is important and to integrate complex information into crucial decisions. So it might be useful to try to build a robot that has emotions too.

Another reason for wanting emotional robots is the prospect that they will be used to look after human beings, as is increasingly common with elderly people in Japan. Robots with emotions might be better at understanding and caring for the people they serve.

Moreover, as robots become more capable of autonomous action, there is a greater need to ensure that they act ethically. We want robots on highways and battlefields to act in the interests of human beings, just as good people do. But ethics is not just a matter of cold calculation; it also needs to take into account emotional processes such as caring and empathy. The emotional makeup of human brains makes us capable of caring about other people and understanding them empathically. So if robots are going to be ethical in the way that people are, they will need emotions.

Estimating the feasibility of making robots emotional depends on understanding what makes people emotional. There are currently three main theories of human emotion, based on appraisal, physiology, and social construction. The cognitive appraisal theory says that emotions are judgments about the relevance of the current situation to a person's goals. For example, if someone gives you $1 million, then you will probably be happy, because the money can help you to satisfy your goals of surviving, having fun, and looking after your family. Robots are already capable of at least a version of appraisal, for example when a driverless car calculates the best way of getting from its current location to its destination. If emotions were just appraisals, then robot emotions would be just around the corner.
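
To make the appraisal idea concrete, here is a minimal Python sketch that treats an emotion as a weighted judgment of how an event bears on an agent's goals. The goal names, weights, and the "windfall" event are invented for illustration; no real robot uses exactly this scheme.

```python
# Toy sketch of cognitive appraisal: an emotion as a judgment of how an
# event bears on an agent's goals. The goals, weights, and the "windfall"
# event below are illustrative assumptions, not a real robot architecture.

GOALS = {
    "survival": 1.0,
    "having_fun": 0.6,
    "caring_for_family": 0.8,
}

def appraise(event_effects, goals=GOALS):
    """Return a crude valence score and label: positive means 'happy'.

    event_effects maps goal names to how much the event helps (+) or
    hinders (-) that goal, on a -1 to 1 scale.
    """
    score = sum(goals[g] * effect for g, effect in event_effects.items() if g in goals)
    if score > 0.5:
        label = "happy"
    elif score < -0.5:
        label = "distressed"
    else:
        label = "neutral"
    return score, label

# A $1 million windfall helps most goals, so the appraisal comes out positive.
windfall = {"survival": 0.9, "having_fun": 1.0, "caring_for_family": 0.8}
print(appraise(windfall))  # roughly (2.14, 'happy')
```

A driverless car's route planner does something loosely analogous when it scores candidate routes against goals such as arrival time and safety, though without any accompanying feeling.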

However, human emotions also depend on physiology. Responses such as being happy to get a pile of money are tied in with physiological changes in heart rate, breathing rate, and levels of hormones such as cortisol. Because robots are made of metal and plastic, it is highly unlikely that they will ever have the kinds of bodily inputs that help to determine the experiences that people have, the feelings that are much more than mere judgments. On the theory that emotions are physiological perceptions, robots will probably never have human emotions, because they will never have human bodies. It might be possible to simulate physiological inputs, but the complexity of the signals that people get from all of their organs makes this unlikely. For example, the digestive tract contains about 100 million neurons that send signals to the brain via the vagus nerve, based on the activities of billions of stomach cells and bacteria.

The third prevalent theory of emotions is that they are social constructions, dependent on language and other cultural institutions. For example, when $1 million falls into your hands, your response will depend very much on the language with which you describe your windfall and the expectations of the culture in which you operate. If robots ever get good at language and form complex relationships with other robots and humans, then they might have emotions influenced by culture.

I think that these three theories of emotion are complementary rather than conflicting, and the new semantic pointer theory of emotions shows how to combine them in brain mechanisms. Robots are already being built with some of these brain mechanisms operating on neuromorphic chips, which are computer chips that mimic the brain by implementing millions of neurons. So robots might achieve some approximation to human emotions through a combination of appraisals with respect to goals, rough physiological approximations, and linguistic and cultural sophistication, all bound together in semantic pointers. Robots would not get human emotions exactly, but the approximation might make the same contributions to their thinking that emotions make to ours.
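
As a very rough illustration of what binding appraisal, physiology, and culture into one representation could look like computationally, here is a toy sketch that uses circular convolution, the vector-binding operation commonly associated with semantic pointers. The dimensionality, role names, and random filler vectors are placeholder assumptions, not the theory's actual neural model or any neuromorphic implementation.

```python
# Crude stand-in for binding appraisal, physiology, and cultural labels into
# one vector, loosely in the style of holographic reduced representations
# (the algebra often used with semantic pointers). All values are placeholders.
import numpy as np

rng = np.random.default_rng(0)
D = 256  # dimensionality of each representation

def vec():
    """A random unit vector standing in for a learned representation."""
    v = rng.normal(size=D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution: a common binding operator for role-filler pairs."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

# Role vectors and toy fillers for the three ingredients of an emotion.
roles = {name: vec() for name in ("appraisal", "physiology", "culture")}
fillers = {
    "appraisal": vec(),   # e.g. "good for my goals"
    "physiology": vec(),  # e.g. simulated heart-rate and hormone readings
    "culture": vec(),     # e.g. the word "windfall" and its connotations
}

# The combined "emotion" vector is the superposition of bound role-filler pairs.
emotion = sum(bind(roles[r], fillers[r]) for r in roles)
print(emotion.shape)  # (256,)
```

In a fuller model, each filler would be a learned neural representation rather than a random vector, and an approximate version of any component could later be recovered from the combined vector by unbinding with the corresponding role.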

The result would be important for worries about the future of humanity, as robots and intelligent computers become more prominent. One of the main concerns about the possibility of fully intelligent and independent robots is that they may act only in their own interests and therefore become harmful to humans. Building robots capable of caring about us might be one way of forestalling technological disaster. Unfortunately, by that time robots will be building robots, and they may prefer to sidestep emotions in favor of their own unpredictable goals.
