AI Surpasses Doctors in Empathy – We’ve Made Physicians Act Like Robots

Artificial intelligence has mastered chess, art and medical diagnosis. Now it’s apparently beating doctors at something we thought was uniquely human: empathy.

A recent review published in the British Medical Bulletin analysed 15 studies comparing AI-written responses with those from human healthcare professionals. Blinded researchers then rated these responses for empathy using validated assessment tools. The results were startling: AI responses were rated as more empathic in 13 out of 15 studies – 87% of the time.

Before we surrender healthcare’s human touch to our new robot overlords, we need to examine what’s really happening here.

The studies compared written responses rather than face-to-face interactions, giving AI a structural advantage: no vocal tone to misread, no body language to interpret, and unlimited time to craft perfect responses.

Critically, none of these studies measured harms. They assessed whether AI responses sounded empathic, not whether they led to better outcomes or caused damage through misunderstood context, missed warning signs, or inappropriate advice.

Yet even accounting for these limitations, the signal was strong. And the technology is improving daily – “carebots” are becoming increasingly lifelike and sophisticated.

Beyond methodological concerns, there’s a simpler explanation: many doctors admit that their empathy declines over time, and patients’ ratings of healthcare professionals’ empathy vary widely.

Inquiries into fatal healthcare tragedies – from Mid Staffordshire NHS Foundation Trust to various patient safety reviews – have explicitly named lack of empathy from healthcare professionals as contributing to avoidable harm. But here’s the real issue: we’ve created a system that makes empathy nearly impossible.

Doctors spend about a third of their time on paperwork and electronic health records, and they must follow pre-defined protocols and procedures. While documentation and protocols have benefits, they have arguably had the unintended consequence of forcing doctors to play the bot’s game. We shouldn’t be surprised, then, when the bot wins.

The burnout crisis makes this worse. Globally, at least a third of GPs report burnout – exceeding 60% in some specialties. Burned-out doctors struggle to maintain empathy. It’s not a moral failing; it’s a physiological reality. Chronic stress depletes the emotional reserves required for genuine empathy.

The wonder isn’t that AI appears more empathic; it’s that human healthcare professionals manage any empathy at all.

Doctors’ empathy declines over time.
Stephen Barnes/Shutterstock.com

What AI will never replicate

No carebot, however sophisticated, can truly replicate certain dimensions of human care.

A bot cannot hold a frightened child’s hand during a painful procedure and make them feel safe through physical presence. It cannot read unspoken distress in a teenager’s body language when they’re too embarrassed to voice their real concern. It cannot draw on cultural experience to understand why a patient might be reluctant to accept certain treatment.

AI cannot sit in silence with a dying patient when words fail. It cannot share a moment of dark humour that breaks the tension. It cannot exercise the moral judgment required when clinical guidelines conflict with a patient’s values.

These aren’t minor additions to healthcare; they’re often what make care effective, healing possible and medicine humane.

Here’s the tragic irony: AI threatens to take over precisely those aspects of care that humans do better, while humans remain trapped doing tasks computers should handle.

We’re heading toward a world where AI provides the “empathy” while exhausted humans manage the technical work – exactly backward. Reversing this requires three fundamental changes.

First, we must train doctors to be consistently excellent at empathic communication. This cannot be a brief module in medical school; it needs to be central to healthcare education. And since AI already matches humans in many technical skills, offloading those tasks should free doctors to focus on genuine human connection.

Second, redesign healthcare systems to protect the conditions necessary for empathy. Dramatically reduce administrative burden through better technology (ironically, AI could help here), ensure adequate consultation time, and address burnout through systemic change rather than resilience training.

Third, rigorously measure both benefits and harms of AI in healthcare interactions. We need research on actual patient outcomes, missed diagnoses, inappropriate advice, and long-term effects on the therapeutic relationship – not just whether responses sound empathic to raters.

The empathy crisis in healthcare isn’t caused by insufficient technology. It’s caused by systems that prevent humans from being human. AI appearing more empathic than doctors is a symptom, not the disease.

We can use AI to handle administrative tasks, freeing doctors’ time and mental space – and even to offer tips that help healthcare professionals boost their empathy. Or we can use it to replace the human connection that remains healthcare’s greatest strength.

The technology will continue advancing, regardless. The question is whether we’ll use it to support human empathy or substitute for it – whether we’ll fix the system that broke our healthcare workers or simply replace them with machines that were never broken to begin with.

The choice is ours, but the window is closing fast.

The post “AI is beating doctors at empathy – because we’ve turned doctors into robots” by Jeremy Howick, Professor and Director of the Stoneygate Centre for Excellence in Empathic Healthcare, University of Leicester was published on 11/07/2025 by theconversation.com