Saturday, March 01, 2025

Talk to the Hand, er, Chatbot

(Illustration: Greg Clarke/WSJ)
Artificial intelligence "chatbots" have advanced to the point where human beings find them more empathetic than human strangers -- and prefer talking to them. [bold added]
A 2023 study in JAMA Internal Medicine found that patients with a medical concern preferred a chatbot’s response to a physician’s nearly 80% of the time. Another study published in the journal Communications Psychology this year found that people consistently found a chatbot more compassionate than trained hotline crisis responders.

Large language models (LLMs) are doing a better job than humans at making people feel seen and heard. This phenomenon, which we can call LLMpathy, is both stunning and controversial...

When we see someone is in pain, or when someone we care about shares a problem, we instinctively want to help. We offer advice, suggest solutions and rattle off how we once dealt with something similar.

These impulses may be noble, even loving, but they aren’t as helpful as we might hope. Rushing to share opinions and hash out next steps can trivialize someone’s pain, and shifting the focus to yourself may unintentionally undermine their hope to be heard.

Chatbots avoid these pitfalls. With no personal experiences to share, no urgency to solve problems and no ego to protect, they focus entirely on the speaker. Their inherent limitations make them better listeners. More than humans, Bing paraphrased people’s struggles, acknowledged and justified how they might feel and asked follow-up questions—exactly the responses that studies show signal authentic, curious empathy among humans.

When people adopt similar strategies, their connections strengthen. Consider “looping for understanding,” a technique in which a listener repeats what someone else says in their own words, then asks if their summary is correct—“Do I have that right?” Chatbots are natural loopers. When humans are taught to do the same, they do a better job of understanding what the other person is feeling and helping them feel heard.
Any man who has been in a long-term relationship already knows this. When your partner brings you a problem, don't try to solve it, at least not immediately. Nod in understanding, look concerned, and say something in agreement, like "Your boss should have given you credit; you have every right to be mad" or "Your mother has always favored your brother."

Human beings have a chance when the conversation eventually moves on from hurt feelings to finding solutions:
It bears noting that the AI advantage in empathetic conversations has limits. Talk for long enough with ChatGPT and you’ll find it a friendly but formulaic partner. Its go-to recipe of “paraphrase, affirm, follow up” may feel warm and attentive the first time, but rote the second and annoying the third...

Research in this area typically asks people to interact with chatbots just once. It is possible that their edge over humans would disappear in longer chats, when their kindness grows repetitive and cloying.
Don't expect AI to stand still, either. If it knows this is the third time we're discussing a subject, the chatbot may well ask whether it can suggest a few solutions, again staying ahead of its human interlocutors.

Relationships with life partners, friends, and relatives deteriorate over time, while AI gets better and better. Take (cold) comfort in one thing it will never do, however: it won't miss you when you're gone.