
The Impact of Artificial Intelligence on Our Encounters

Technologies like AI don’t just do things for us; they do things to us.

Key points

  • When technologies enter social life, they can transform our relations and how we think about ourselves.
  • We are already using AI in ways that change how we encounter others.
  • The crises we face invite us to further outsource our relations to chatbots offering “bonds.”

On February 28, I participated in a seminar on a new book, Encountering Artificial Intelligence,1 which explores the ethics of artificial intelligence (AI) from within the Catholic intellectual tradition. I was not involved with the book but was asked to respond to two of the contributors and to offer some reflections on evaluating the impact of AI on our encounters. The following is drawn from my remarks.

When we think about certain technological advances or new uses of technology, we should distinguish between two kinds of cautionary arguments against their possible harmful effects. The first relates to the direct impacts of a technology, whether on the environment, human health, or the social order. These might include the depletion of resources, pollution, industrial accidents, the side effects of drugs, or, in the case of artificial intelligence, uncertainties about impartiality, transparency, reliability, or privacy.

Such impacts are the most urgent because they bring immediate harms and costs. And, because the effects are so clear and specific, they can be more directly addressed by law, regulation, policy, and other interventions.

The second kind of cautionary argument addresses transformations in the nature of human life—changes that may occur quite apart from the direct impacts identified by the first kind of cautionary argument, and even in their absence. These potential dangers are not as specific or measurable as the more direct ones and, therefore, are more subject to speculation and debate. At issue is how technologies enter social life, how they can transform our relations with others, and the way we feel and think about ourselves. Do they incline us, we ask, to see ourselves in the reduced image of our new technologies, as machine-like creatures subject to mechanical regulation?

Much of Encountering Artificial Intelligence addresses concerns about the effects of AI on us as persons and families, and on our relationships and practices. The contributors are working against the still-common idea that digital technologies are “just tools.” We know they are more than that. As a now large body of evidence shows, technologies like AI don’t just do things for us. They do things to us.2

Evocative Objects

In her book The Second Self, the sociologist and psychologist Sherry Turkle began referring to computers and certain software programs as “evocative objects,” objects that provoke self-reflection. People from all walks of life, she argued, are confronted with machines whose behavior and mode of operation invite psychological interpretation and, at the same time, incite them to think differently about human thought, memory, and understanding. “Even children,” she observed, “playing with the first generation of computer toys and games were asking new questions about the machine’s ‘life’ and ‘mind’ and then, by extension, wondering what was special about their own.”

Turkle demonstrated the interaction effect, or feedback loop, between these evocative technologies and our self-understanding and conduct of life. The arrow of influence also points in the other direction. New technologies always come into an existing institutional and cultural order, and this social context influences the way these technologies are received and the uses to which they are put.

A reading of our current situation is critical to the concerns raised by Encountering Artificial Intelligence. The worry is not just with AI or its design or its immediate impacts. The larger apprehension is with how our way of life is fostering potentially problematic and unethical applications of the technology. The future is hard to predict, of course, and things rarely go the way we imagine they will. But we are already socially, emotionally, and intellectually predisposed to put AI to particular and troubling uses.

Encounter

Photo by Andy Kelly on Unsplash

Among the many ways in which the difference between persons and machines is being blurred, I will touch briefly on two: care for others and matters of the heart.

Our care crisis: Our care systems are stretched—health care, childcare, mental health care, care for the elderly. We are living in a time of growing economic uncertainty, declining birth rates, longer life expectancy, rising women’s labor force participation, and a deepening mental health crisis among young and old alike. The number of people who need or seek care is large and steadily rising.

Under these conditions, we are prone to look for what the sociologist Emma Dowling calls a “care fix.”3 We are tempted to outsource care to apps, robots, and chatbots. Tremendous resources, for example, are being poured into the development of sophisticated “sociable robots” that can provide “companionship” and “care.” According to a recent review of the literature, “In healthcare, social robots … can positively interact with the disabled, children, and the elderly, reducing the workload of nurses, physicians, and caregivers. Because of their artificial intelligence, they have a high level of interactivity,” equipped with algorithms for speech, facial, and emotion recognition, among other capacities.4

As we begin to employ machines for this care work, we start to revise our understanding of what care means. In mental health care, for instance, there has been an explosion of apps and chatbots such as Woebot and Wysa, all promising an “emotional bond” that “is as deep as that with a human therapist.” Using the technology inclines us to accept this proposed equivalence, easing our reservations and deepening our commitment to the fix.

Our relationship crisis: As Robert Putnam and others have convincingly demonstrated, personal connections between individuals have thinned and deteriorated. People socialize less and have abandoned face-to-face communities of every kind, from bowling leagues and parent-teacher associations to churches and volunteer groups. Friendships are down, estrangements are up, and relations between men and women are badly frayed. People are struggling to form any satisfying relationships.

Into this mess come AI and chatbot “companions.” Applications such as Replika, Mitsuku, and Cleverbot are being offered as “friends” and “allies” to provide “empathy” and “emotional connection.” Replika, which boasts more than 10 million users, offers an “AI companion who cares” that is “always here to listen and talk.” Even better than real people, who are prone to make judgments, Replika is “always on your side.” And trendlines indicate the growing use of Replika and similar chatbots, such as Kupid.AI and Romantic AI, for romantic and sexual relationships.

It is perhaps only a small step from such resistance-free, self-referencing “bonds” to a new standard against which we measure every real relationship.

Resisting Our Deflation

Half a century ago, in his book The Insecurity of Freedom, the theologian and philosopher Rabbi Abraham Joshua Heschel argued, “We have been guilty of underestimating the mind and soul of man. We must restore him [the human person] to his true stature.” He continued: “the task of counteracting the deflation of man and the trivialization of human existence is incumbent upon every [one]. But it is the duty of every teacher to teach and to live the claim that every [person] is capable of genuine love and compassion, of discipline and universality of judgment, of moral and spiritual exaltation.”

That, in brief, is the duty to which the contributors to Encountering Artificial Intelligence are responding.

References

1. AI Research Group of the Centre for Digital Culture. Encountering Artificial Intelligence: Ethical and Anthropological Investigations. Journal of Moral Theology 1 (Theological Investigations of AI), 2023: i–262. https://doi.org/10.55476/001c.91230.

2. See Sherry Turkle, Life on the Screen. New York: Simon & Schuster, 1995.

3. Emma Dowling, The Care Crisis: What Caused It and How Can We End It? London/New York: Verso, 2021.

4. Luca Ragno, Alberto Borboni, Federica Vannetti, Cinzia Amici, and Nicoletta Cusano, “Application of Social Robots in Healthcare: Review on Characteristics, Requirements, Technical Solutions.” Sensors 23, no. 15 (2023): 6820. https://doi.org/10.3390/s23156820.
