
Empathy in the Age of AI: Keeping Human Connection at the Heart of Patient Care

By Kate Eisenberg, MD, PhD, FAAFP, Senior Medical Director, DynaMed Decisions

One of the biggest shifts in my career as a primary care physician hasn’t come from a new drug or a breakthrough surgical technique, but from the evolution of information technology and data. My first few weeks of medical school in the mid-2000s included a workshop on how to manage a high volume of medical information in practice – a topic deemed so important it was presented to us before we set foot in the anatomy lab, that rite of passage for first-year medical students.

Today, the challenge has only accelerated: overwhelming amounts of data, technology, evidence, and guidelines both support and intrude on our work. The healthcare industry generates approximately 30% of the world’s data volume. As a result, we clinicians are often forced into difficult choices about how to spend our limited time. How do we carefully review each chart, document every encounter, and communicate with staff and colleagues, while still finding the presence to fully hear a patient’s history, understand their concerns, and provide thorough counseling?

As artificial intelligence (AI) tools promise to streamline workflows and provide real-time decision support, the real question is whether these tools will ultimately support our connection with the person sitting in front of us. True clinical excellence requires both evidence-based precision and authentic human connection.  

At the same time, patients are now seeking answers online – including from AI chatbots – some of which can deliver seemingly accurate information with endless patience. In this rapidly evolving relationship between clinicians, patients, technology, and data, it is critical to remember that both we and our patients are people, and our experience in delivering and receiving healthcare is still primary, whatever tools we use to accomplish that goal.

Why Human Connection Still Matters

While our culture may revere Dr. House-type physicians who are brilliant but cantankerous, in reality, clinicians shouldn’t have to choose between knowledge and compassion. The most successful care happens when the two sides merge. Technology can help us gather and process data to support our decision-making. It can also support our communication, but it cannot replace the trust and respect that grow only over years of a relationship.

Even the most accurate diagnosis or treatment plan is not helpful if a patient can’t or won’t follow through with it. A patient’s values, education, and daily struggles shape whether they’ll adhere to a physician’s recommendations. I did not understand the power of this relationship until I had been practicing for a number of years. Patients would begin to take my advice without question, even for interventions like influenza vaccines or cancer screenings they may have declined in the past. I’d often be told, “I’ll do it if you say so, doc.”

That degree of trust and respect can only come over time, when a clinician listens and learns about a patient’s daily realities – their work schedule, financial constraints, and family responsibilities – and tailors a care plan to that individual. For example, telling a patient with a demanding job and two young children to “rest more” is an empty recommendation if we don’t also help them think through how to make that realistic.

Barriers to Empathy in the Digital Age

Balancing AI usage and human connection is not without its challenges. The most immediate barrier to empathy is time pressure. Today’s clinicians are under immense stressors: limited budgets, increased workloads, and longer hours. This directly impacts their ability to focus on, and humanize, the patient in front of them. Add a sea of information from electronic health records, lab reports, and the evidence base, and it becomes difficult to personalize care – let alone apply the latest evidence, extract key details from the record, and then repeat that process across 20 or 30 patients a day.

The rise of misinformation adds further time pressure for clinicians, along with the challenge of differentiating credible sources from bad advice. Patients now bring a list of questions from “Dr. AI” to their appointments, often fueled by fear or mistrust. While this may well support people’s understanding of their care in some cases, correcting misinformation without dismissing patient concerns requires time and empathy, both already in short supply.

Another challenge fueled by the growth in AI-based tools is a widening gap between those who have access to the most modern, evidence-based care and those who do not. When patients struggle to find primary care providers in their area, they are pushed to rely on digital tools and online forums for advice. Furthermore, communicating with patients across diverse educational and socioeconomic backgrounds requires immense adaptability and clarity, as medical jargon can be a major barrier to achieving patient goals.

In other words, the barriers of time pressure, misinformation, and inequity may look different with more AI tools in play, but they all lead back to the same core question: “How do we keep the patient at the center?”

Solutions: Using Emotional Intelligence to Deliver Impactful Patient Care

Providing impactful care in the AI era starts with clinicians having a high level of emotional intelligence and awareness. One key way to practice this is by aligning any recommendations or advice with a patient’s lived experience. Education goes beyond delivering information or clinical data; it needs to resonate with the patient’s priorities, values, and daily reality.

For example, I once had a patient with diabetes who was doing his best to take care of himself but had a persistently high hemoglobin A1c, a measure of average blood glucose (sugar) over the preceding few months. During a visit where I spent time fully focused on his nutrition, he had an epiphany about potatoes contributing to his high sugar levels. He came back to his next visit with his diabetes under good control.

Taking the time to make him feel heard and respected, while providing personalized education, ended up doing more for him than any medication or test had done. That moment reminded me: empathy plus evidence leads to trust, motivation, and adherence.

Prioritizing clear, jargon-free communication is also key to improving access and equity, particularly given widely varying levels of health literacy among the patients we serve. A colleague of mine often tells his medical students to think of a conversation with a patient as building a bridge. You begin on their side with what they already know, and then carefully guide them across to new information.

In a similar vein, health equity must be proactively addressed within AI tools too. An AI system’s potential blind spots need to be considered by asking, “Who is missing here?” This means integrating equity checks into how AI models are built and applied through multiple layers of iterative testing and a human-in-the-loop (HITL) approach.

A HITL model uses human oversight throughout the lifecycle of AI systems, from dataset development to deployment monitoring. In the case of health equity, this starts with making sure diverse populations are represented in training models and extends to considering ways to assess for and mitigate bias in the tools built on top of AI models. Humans can also provide what a machine lacks, such as context, nuance, and ethical judgment, to help validate or override automated decisions that aren’t fair or inclusive.
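To make that concrete, here is a minimal sketch of one small piece of such a loop: a routine that audits a model’s accuracy by patient subgroup and flags underperforming groups for human review, rather than letting the system decide on its own. It is written in Python with entirely hypothetical case data, field names, and thresholds – an illustration of the pattern, not any particular vendor’s implementation.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Case:
    prediction: str  # the model's recommendation for this patient
    label: str       # the known-correct action, where available
    subgroup: str    # a demographic or access-related category

def subgroup_accuracy(cases):
    # Accuracy of the model's recommendations within each subgroup
    hits, totals = defaultdict(int), defaultdict(int)
    for c in cases:
        totals[c.subgroup] += 1
        hits[c.subgroup] += int(c.prediction == c.label)
    return {group: hits[group] / totals[group] for group in totals}

def flag_for_human_review(cases, gap_threshold=0.05):
    # Flag any subgroup whose accuracy trails overall accuracy by more
    # than gap_threshold; humans, not the model, decide what happens next
    overall = sum(c.prediction == c.label for c in cases) / len(cases)
    return {group: acc
            for group, acc in subgroup_accuracy(cases).items()
            if overall - acc > gap_threshold}

# Hypothetical audit data, not drawn from any real model or population
audit = [
    Case("screen", "screen", "group_a"),
    Case("defer", "defer", "group_a"),
    Case("screen", "defer", "group_b"),
    Case("screen", "defer", "group_b"),
]
print(flag_for_human_review(audit))  # {'group_b': 0.0}

In a real deployment, the flagged output would feed a review queue staffed by clinicians and equity experts, and the threshold itself would be a deliberate, documented choice rather than a default.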

Finally, clinicians must build trust with patients in two ways. First, information brought from general-purpose AI tools into the exam room needs to be fully and openly discussed, and, as appropriate, either applied or set aside without invalidating the patient’s concerns. Second, given research showing that patients place less trust in physicians who use AI tools, transparently explaining how vetted, evidence-backed AI tools fit into a comprehensive care plan is also key to building trust as both clinicians and patients navigate applying these tools in practice.

Balancing Empathy and Innovation

Using AI isn’t about replacing human connection with technology. In fact, bringing humanity to the bedside is needed now more than ever in this age of information overload. What AI can do is automate tedious work and empower clinicians to make evidence-backed decisions more consistently by providing ready access to high-quality, synthesized information. The result is more time to focus on what matters most: the patient.

While AI has the potential to accelerate and improve decision-making, it can’t sit with a patient as they receive bad news, genuinely care about their family, or bring lived experience to a patient encounter.

The future of medicine depends on refusing the false choice between empathy and innovation. We need both woven tightly together to keep care human.
 
