Verified by Psychology Today

Finding Clinical Compassion in Large Language Models

AI's unique ability to craft "empathetic" language may be a powerful tool.

Key points

  • AI-generated replies can be perceived as compassionate and may positively affect patient care.
  • Structured compassion from AI may enhance patient engagement and adherence to treatment.
  • Human input remains key, adding personalization to AI's compassionate communication.
Source: Art: DALL-E/OpenAI

Compassion from an algorithm? Who would have thought? Yet, as large language models (LLMs) make their way into the practice of medicine, we're uncovering an unexpected twist: these AI-generated responses are being perceived as compassionate, and they might have a real-world impact on patient care.

A study published in JAMA Network Open explored how AI-generated replies may affect physicians' communications with patients. The study followed 52 physicians who used AI-generated message drafts over several weeks and compared their behavior with a control group of 70 physicians who did not use the AI tool. Analysis of key metrics, such as time spent reading and replying to messages, revealed that use of AI drafts was associated with a 21.8% increase in read time and a 17.9% increase in reply length, while reply times did not change significantly. Physicians recognized the value of these drafts and suggested areas for improvement, highlighting the potential for AI to create a "compassionate starting point" in patient communications.

The Role of Compassion in AI

Despite its artificial origins, AI-generated compassion is becoming a meaningful element in healthcare communication. Physicians in the study found that using AI-generated drafts eased their cognitive load by offering a caring framework for their responses. This perception of compassion doesn’t stem from genuine emotion but rather from the careful use of language and structure. It challenges the notion that compassion must be inherently human, suggesting that supportive communication—through acknowledgment and empathetic phrasing—can be partially replicated by algorithms.

This finding raises an important question: If artificial compassion can evoke positive reactions and improve communication, does the “source” of that compassion truly matter? For patients, the content and tone of communication may carry more weight than whether it was generated by a human or AI.

From Physician Perception to Patient Engagement

Although the study focused primarily on physicians’ experiences, it hinted at the potential for AI-driven compassion to influence patient engagement. A well-crafted, empathetic message—human or AI-generated—can make patients feel understood, potentially increasing their willingness to follow treatment plans, attend follow-up appointments, or adopt lifestyle changes.

This suggests that AI’s structured compassion could catalyze improved patient outcomes. A patient receiving a detailed and warm message from their healthcare provider may be more inclined to adhere to medical advice, even if they know parts of the message were AI-generated. While this compassion is admittedly contrived, it still has the potential to drive real-world behavioral changes.

Artificial Empathy: More Than Just Words?

The irony here is striking: finding "compassion" in what we usually consider emotionless technology. When AI crafts responses using empathetic language, it seems to fulfill the elements of what patients and healthcare professionals perceive as empathy. This suggests that even artificial constructs can create meaningful interactions if they include key communicative elements.

However, the study also reveals that human input remains crucial. Physicians often adjusted the AI-generated drafts to better fit the patient's specific needs, adding an irreplaceable layer of personalization. This hybrid approach might be the key to achieving interactions that feel both supportive and authentic, even if the initial sense of compassion is artificially generated.

The Road Ahead: Compassionate AI and Patient Outcomes

The key takeaway here is that compassion, even in its artificial form, can serve as a useful tool in medicine. While AI doesn’t feel empathy, its ability to structure compassionate language could enhance patient-physician communication and potentially lead to improved patient engagement and health outcomes. Future research is needed to explore how patients perceive these AI-generated messages directly and to determine if this added empathy translates into measurable improvements in health behaviors.

Who would have thought? Compassion has found its way into the world of technology, and it just might transform patient care. In the end, it may not matter whether this compassion is human or artificial; what matters is the positive impact it can have on patients' lives.
