

Can Artificial Intelligence Increase Our Morality?

Just as we define our technologies, they define us.

Source: May Hill Design with support from Templeton World Charity Foundation and Now You Know Media

In discussions of AI ethics, there’s a lot of talk of designing “ethical” algorithms, those that produce behaviors we like. People have called for software that treats people fairly, that avoids violating privacy, that cedes to humanity decisions about who should live and die. But what about AI that benefits humans’ morality, our own capacity to behave virtuously?

That’s the subject of a talk on “AI and Moral Self-Cultivation” given last week by Shannon Vallor, a philosopher at Santa Clara University who studies technology and ethics. The talk was part of a meeting on “Character, Social Connections and Flourishing in the 21st Century,” hosted by Templeton World Charity Foundation, in Nassau, The Bahamas. (Full disclosure: I was the invited respondent for Vallor’s talk, providing commentary and facilitating discussion, and TWCF paid for my travel.)

Vallor painted a troubling picture of technology as it stands, noting several ways in which algorithms degrade our morality. Russian bots disrupt civil discourse online. YouTube recommendations feed our compulsion to click on extremist content. Video games goad us into continued play, feeding addictive behavior. Even well-meaning AI applications have potential dark sides, she said. Algorithms aimed at putting at-risk students back on track could conceivably increase conformity. Therapy apps that give points for good behavior might make personal growth feel like a badge-harvesting grind. Social credit systems like that in China, or even more subtle systems of nudging, could make virtue feel inauthentic.

Vallor noted a few successful efforts to temper our worst impulses. Some platforms filter harmful content, and some phones lock people out after extended screen time. But she labeled these “remedial efforts,” meant to limit harms rather than generate new benefits. Moreover, she pointed to three reasons morality-enhancing tech hasn’t been a priority for Silicon Valley: there’s no clear profit motive, modifying our behavior can seem paternalistic, and even deciding which behavior to encourage can stifle pluralism.

But Vallor held out some hope. “Are we really stuck between the Scylla of a digital Wild West, and the Charybdis of surrender to Orwellian digital overlords?” she asked. “I don’t see why we must be.” Here she cited the humanizing force of Fred Rogers, imagining the companionship of a virtual Mr. Rogers, or at least the types of apps he would have designed. “AI systems could invite us to reflect privately upon the sort of person we think we are or want to be,” she said, “and then offer ways in which we might steer our actual choices more effectively in that desired direction.” Of course, she cautioned, even a virtual Mr. Rogers would not be immune to the issues of fairness, accountability, and transparency that attend almost every other AI system.

As I said at the meeting, I found Vallor’s talk wise, insightful, and beautifully written. I went on to mention a few near-term AI systems that enhance human cooperation, or at least coordination. Nicholas Christakis (who presented later at the meeting) has shown that interspersing bots in social networks can help people solve puzzles that require coordination. In recent research, autonomous vehicles learned to reduce surrounding traffic congestion in a simulation—and could perhaps reduce road rage in reality. When Twitter bots call out racists, the racists use fewer slurs; Intel is similarly teaching AI to call out hate speech in Reddit forums. And researchers have developed a reinforcement learning algorithm that’s better than people at eliciting cooperation from human partners in the iterated prisoner’s dilemma (as long as the partners think it’s a person).

And while there aren’t many apps that target morality specifically, moral development can result from broader interventions. Therapy apps improve users’ mental health, and when we’re well we can focus on being good. AI could also help people dedicate more face time to each other by automating paperwork and other rote tasks. And social robots have helped autistic children and trauma survivors open up to other people.

For sure, designing technologies to encourage ethical behavior raises the question of which behaviors are ethical. Vallor noted that paternalism can preclude pluralism, but, playing devil’s advocate, I took the argument for pluralism up a level and noted that some people support paternalism. Most in the room were from WEIRD cultures (Western, educated, industrialized, rich, democratic), so China’s social credit system feels Orwellian to us, but many in China don’t mind it.

The biggest question in my mind after Vallor’s talk was about the balance between self-cultivation and situation-shaping. Good behavior results from both character and context. To what degree should we focus on helping people develop a moral compass and fortitude, and to what degree should we focus on nudges and social platforms that make morality easy?

The two approaches can also interact in interesting ways. Occasionally extrinsic rewards crowd out intrinsic drives: If you earn points for good deeds, you come to expect them and don’t value goodness for its own sake. Sometimes, however, good deeds perform a self-signaling function, in which you see them as a sign of character. You then perform more good deeds to remain consistent. Induced cooperation might also act as a social scaffolding for bridges of trust that can later stand on their own. It could lead to new setpoints of collective behavior, self-sustaining habits of interaction.

There’s a lot to speculate about. What’s clear is that just as we define our technologies, they define us. All the more reason to think hard about where technology is going—and to involve psychologists and philosophers in the discussion.
