


Teaching AI to Think Like Us

New research suggests computers can identify emotional arcs.

Worries about artificial intelligence abound. There are pressing questions about how to keep autonomous robots from ending human lives, or even bringing about our extinction. Many think the answer is to develop ways of programming AI to make decisions the way we do. Even if we don’t always behave morally, human beings are the most morally upstanding agents we know of. But it’s proving extremely difficult to program AI to think like us.

Source: Hadeer Mahmoud/Flickr

In light of all this angst, a new paper by a team of researchers may signal a welcome advance. The researchers trained a computer to identify different varieties of emotional arc in stories from the Project Gutenberg database. The computer was able, for example, to identify stories that fit the “rags to riches” emotional arc, constituted by a simple rise, and a different set of stories that exhibit the “Cinderella” emotional arc, constituted by a rise, a fall, and then another rise. Their conclusions focus on what this tells us about how many core emotional arcs there are (six!) and which arcs characterize the most-downloaded stories (“Cinderella” and “Oedipus”). They also conclude that we may be able to use these findings to computer-generate “compelling stories” and construct persuasive arguments. Their final conclusion, however, is the most interesting: they think this approach could help us teach “common sense to artificial intelligence systems.”
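
To get a feel for how a computer can extract an emotional arc at all, here is a minimal sketch in Python. It scores equal-sized windows of a story’s text against a tiny sentiment word list and then labels the rough shape of the resulting curve. This is only an illustration of the general idea, not the researchers’ method, which relies on a large human-rated sentiment lexicon and matrix decomposition over thousands of Project Gutenberg books; the word lists, window count, and shape heuristic below (POSITIVE, NEGATIVE, window_scores, classify_arc) are assumptions made for this example.

```python
# Illustrative sketch: estimate a story's emotional arc by scoring equal-sized
# windows of text with a toy sentiment lexicon, then labeling the curve's shape.
# The lexicon, window count, and thresholds are assumptions for this example only.

POSITIVE = {"love", "joy", "happy", "hope", "wedding", "triumph", "rich"}
NEGATIVE = {"death", "sad", "fear", "loss", "cruel", "poor", "despair"}


def window_scores(text, n_windows=10):
    """Split the text into n_windows chunks and score each as
    (positive words - negative words) / chunk length."""
    words = [w.strip(".,!?;:\"'") for w in text.lower().split()]
    scores = []
    for i in range(n_windows):
        chunk = words[i * len(words) // n_windows:(i + 1) * len(words) // n_windows]
        if not chunk:
            continue
        pos = sum(w in POSITIVE for w in chunk)
        neg = sum(w in NEGATIVE for w in chunk)
        scores.append((pos - neg) / len(chunk))
    return scores


def classify_arc(scores):
    """Crude shape heuristic: compare mean sentiment of the first, middle,
    and final thirds of the story."""
    third = max(1, len(scores) // 3)
    start = sum(scores[:third]) / third
    middle = sum(scores[third:2 * third]) / third
    end = sum(scores[2 * third:]) / max(1, len(scores[2 * third:]))
    if start < middle < end:
        return "rise (rags to riches)"
    if start > middle > end:
        return "fall (tragedy)"
    if start < middle > end:
        return "rise-fall (Icarus)"
    if start > middle < end:
        return "fall-rise (man in a hole)"
    return "flat or mixed"


if __name__ == "__main__":
    # A toy "story"; this coarse heuristic registers it as a simple rise.
    toy_story = ("poor and sad, she worked without hope, until love arrived, "
                 "then cruel loss and despair, and at last triumph, joy, "
                 "a wedding, happy forever")
    print(classify_arc(window_scores(toy_story)))
```

A real system would use a far richer lexicon and finer-grained shape analysis; a three-segment heuristic like this one cannot, for instance, distinguish the rise-fall-rise of “Cinderella” from a simple rise.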

Though the authors do not expand much on this final conclusion in this paper (but see here), there seems to be a clear connection between the ability to identify emotional arcs and the ability to deliberate like a human being. It’s a truism that humans sometimes make decisions on the basis of emotional assessment. We decide to eat because we’re sad, or to avoid risk because we’re nervous. But there seems to be more to it than the simple observation that we are creatures whose judgments and actions are influenced by our emotions.

Over the past couple of decades, J. David Velleman has been developing an account of human agency that portrays us as self-understanders. He contends that your actions issue from your assessment of what it makes sense for you to do, in two senses. On the one hand, a piece of behavior may fit your understanding of your own character: you can make sense of yourself as someone who would do that. When we reason about what to do in this sense, we are self-explainers who engage in “causal-psychological self-understanding.” On the other hand, a piece of behavior may be intelligible for you to do because it satisfies your capacity for narrative explanation. It fits into your life story by contributing to or completing a familiar emotional arc.

Conceived of in this way, characteristically human deliberation comes in two flavors. And the results of the study mentioned above speak to the possibility that AI might be able to mimic the flavor that involves emotions. Though it’s not the same as having emotions, the ability to identify the emotional arcs that figure in human narratives may allow an AI to grasp its own behavior in the same terms as we, at times, grasp ours. An AI that decides to do something because doing it would complete a familiar emotional arc would be making a decision in a recognizably human way. It would be grasping what it is up to in terms of enacting a narrative. This mode of decision-making is second nature to us. Maybe there’s reason to hope that an AI can learn it.

Perhaps this hope can provide some measure of comfort in the face of pronouncements about the impending robot apocalypse. Just as human decision-making is not guaranteed to lead to morally good behavior, this mode of decision-making would not guarantee morally upstanding AI. But it may just be a step in the right direction.
