Artificial Intelligence

Chatbots Could Start Shaping How We Trust and Who We Trust

Like other technologies, generative AI's effects on our values may be profound.

Key points

  • Trust is a keystone of social morality; trusting another person signals respect for them.
  • People find humans "trustworthy," but machines are "reliable." What happens if chatbots outperform humans?
  • As we integrate generative AI into our lives, trusting humans "may be seen as unnecessary and discardable."
Source: Emiliano Vittoriosi / Unsplash

Are chatbots changing social morality?

Changes in social morality refer to changes in what people believe to be good and bad or right and wrong. Among the many ethical questions raised by chatbot development and use is their potential to influence social morality.

We know that technologies such as robots, computer interfaces, and algorithmic systems can have various psychological effects on us. They can cause us to lower our guard, make us feel less alone, and lead us to make all kinds of assumptions about reliability and value. But they can also have deeper, more long-lasting psychological effects that we usually don’t think about.

We know that technology has served as a “mediator” of moral change in many different ways, as Peter-Paul Verbeek (2012, 2013) and others have detailed. The smartphone is a common example. Its proliferation has shifted the value of our everyday experiences: Once, our social interactions with friends and colleagues were largely valuable in and of themselves, but now those interactions also have widely recognized instrumental value as content to be recorded, shared, and even monetized.

The presence of the smartphone has, in many ways, disrupted shared moral norms and expectations that previously defined everyday activities. “The technology has enabled this reinterpretation of the moral value of everyday experiences,” as some theorists describe it (Danaher & Sætra, 2022, p. 35).

Are chatbots poised to shift social morality in subtle but significant ways? Very possibly.

Consider the concept of trust. Can we only trust other humans? We typically judge machines based on reliability, but what if we decide that a machine—a chatbot—is trustworthy? Dozens of books and articles have been written on how trust functions in a moral system.

Most recently, in 2022 (before the rollout of OpenAI’s ChatGPT), two technology theorists, John Danaher and Henrik Skaug Sætra, addressed the question of how technology can affect our understanding of trust and its function in our social lives. Trust, they wrote, “is the keystone in our broader value system by facilitating productive cooperation and coordination. If we can trust others, we can enhance our autonomy, happiness, mental well-being, health, relationships, and so on…. Trust is a way of signaling respect to another person. If we trust someone, we are respecting their honesty, their competence, and their status as a co-equal moral citizen” (2022, p. 35).

They identify several ways in which technology can influence or even undermine trust and the important role it plays in our social lives. Drawing from their analysis, we should all consider how chatbot development and use may, even now, be influencing our sense of trust.

  • Chatbots may exploit our tendency to over-trust machines with social features. Their conversational interfaces invite us to attribute more competence and veracity to generative AI models than they so far deserve, and to rely on them with insufficient justification (see Sundar et al., 2015). We make assumptions about their intelligence, autonomy, and even their capacity for creativity. This is especially likely when we wrongly assume they work like a Google search when, in reality, they are not designed to retrieve facts but to predict probable word sequences, which is why errors and misinformation remain common.
  • Chatbots could disrupt longstanding patterns of interpersonal trust because AI is, in many ways, perceived as more capable than humans at providing valuable information quickly and efficiently. If AI systems are widely perceived as more reliable, the result could be a “redistribution” of trust away from human interlocutors. “We have always known that humans are fallible, but there is a difference between being the best there is, but fallible, and simply being the best human, when machines exceed our capabilities,” Danaher and Sætra write.
  • We typically judge machines by their reliability and generally reserve trust for human relationships. But as we integrate generative AI tools more deeply into our everyday lives, we may end up marginalizing the value of trust itself. “Rather than trust being seen as an essential or core instrumental social value (the glue that binds together cooperative relations), it may be seen as unnecessary and discardable,” Danaher and Sætra write (p. 47). This, in turn, could erode respect for others, since trusting someone is an elemental gesture of respect.
  • Increasing reliance on chatbots could invite a “robotomorphic” effect by subtly influencing our moral perception of others. Whereas anthropomorphism refers to assigning human attributes to animals and things, robotomorphy refers to attributing robot qualities to human beings. The idea has a long history, running from Thomas Hobbes’s mechanistic account of the mind to today’s neural network theorists who liken the human brain to a computer. But the more such metaphors take root, and the more we see ourselves as machine-like, the less valuable the concept of trust becomes. “Increased robotomorphy might…change trust in human beings into something more akin to a question of whether or not we can rely on each other just as we rely on a car or a dishwasher,” Danaher and Sætra write (pp. 47-48).

References

Danaher, J., & Sætra, H. S. (2022). Technology and moral change: The transformation of truth and trust. Ethics and Information Technology, 24. https://doi.org/10.1007/s10676-022-09661-y

Sundar, S. S., Jia, H., Waddell, T. F., & Huang, Y. (2015). Toward a theory of interactive media effects (TIME): Four models for explaining how interface features affect user psychology. In S. S. Sundar (Ed.), The handbook of the psychology of communication technology (pp. 47-87). Malden, MA: Wiley Blackwell.

Verbeek, P. P. (2012). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press.

Verbeek, P. P. (2013). The moral status of technical artifacts. Philosophy of Engineering and Technology, 155, pp. 1-9.
