Finding the balance between efficiency and empathy with AI in CX

By Lindsay Fifield, Director of Customer Success at Forethought

Customers are happiest when they feel understood, and they trust humans to deliver that: 70% of consumers believe human customer service agents are more likely to help if they can relate to their emotions. While generative AI has proven capable of using empathetic language, the rise of agentic AI poses a new challenge: the actions AI takes must be just as empathetic as its language.

Agentic AI has the potential to completely transform customer service by dramatically improving efficiency, handling repetitive tasks, and resolving issues at scale. But this efficiency risks losing the human touch that builds trust and retention. Humans need to oversee agentic AI to ensure it acts with empathy, and these practical strategies will show you how.

What’s the potential for empathetic AI systems?

Agentic AI systems can plan and execute customer support actions without human intervention, including adjusting delivery schedules, issuing refunds, and resolving technical issues. This transforms support systems from reactive tools into ones that actively resolve customer concerns.

The potential for agentic AI is massive. Gartner predicts that by 2028, at least 15% of day-to-day work decisions will be made autonomously through agentic AI, compared to virtually none today. This shift will redefine the role of the customer service agent from handling routine inquiries to focusing on high-value interactions.

The real power of agentic AI lies in its scale, provided these systems can act with empathy. If every customer gets an immediate response that anticipates their needs, businesses can scale with far less overhead.

Scaling AI comes with challenges, including keeping data private, complying with regulatory standards, and ensuring ethical transparency. Perhaps most importantly, businesses must prioritize ensuring AI acts with empathy in its decision-making.

What happens when AI isn’t empathetic?

Customer service can profoundly impact people’s lives. From healthcare claims to travel emergencies, the way support systems respond in critical moments can impact a customer’s well-being. When AI lacks empathy, it doesn’t just risk retention—it risks failing the people it serves.

According to a survey by Colliers, 61.3% of U.S. adults identified a lack of empathy as their top concern about using AI in customer service. That’s because without empathy, AI decisions can feel cold and rigid, even when technically correct.

A healthcare AI system might swiftly deny a claim for life-saving surgery because the claim doesn’t strictly adhere to policy, without considering the emotional weight of the situation. A travel AI system might decline a flight change request during a family emergency because it doesn’t meet fare rules. A utility AI system might automate service shutoffs for missed payments in an area recovering from a natural disaster.

The key to avoiding these traps is proper training of both humans and agentic AI systems. Responses and actions should be grounded in real-time interactions and paired with human oversight, especially in sensitive scenarios.

How to make sure your agentic AI systems are empathetic

Making agentic AI empathetic takes careful planning. While the technology is great at streamlining tasks and resolving issues, it needs to learn the human side of customer interactions. From segmenting intents to providing thoughtful handoffs, these steps can help keep AI effective without losing empathy.

1. Segment AI system responses by customer intent

Using keywords, you can train an agentic AI system to act based on customer intent. This helps the system understand the context behind a message and respond in a way that matches the emotional weight of the situation.

Intents define specific goals, like “emergency” or “claim denial,” and are trained with diverse phrases to reflect how customers naturally express their needs. The patient denied a claim for life-saving surgery in the earlier example wouldn’t receive a generic response like “Your claim was denied due to [reason].” Instead, a well-designed intent would provide an empathetic reply: “I understand how important this treatment is for you. Let me review the specifics of your claim and explore the next steps together.” If the AI couldn’t resolve the issue, the system would escalate to human agents to ensure the customer feels supported.

To set up and train intents effectively, businesses should follow these best practices (a rough sketch of an intent definition follows the list):

  • Define clear goals for each intent: Establish what AI needs to achieve for each query type.
  • Identify common user phrases: Gather diverse examples of how customers express these intents, including variations in tone and language.
  • Use diverse training data: Train the AI to recognize variations in phrasing and context.
  • Test and refine continually: Evaluate the AI’s performance regularly to address gaps.
  • Review and update intents: Continuously train with new use cases or changes in customer behavior to keep the system effective.
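
To make this concrete, here is a minimal sketch of what an intent definition with an empathy-aware response template might look like. The names and structure (Intent, classify, the keyword matching) are illustrative assumptions, not Forethought’s actual API:

    from dataclasses import dataclass

    @dataclass
    class Intent:
        name: str
        training_phrases: list[str]           # diverse examples of customer phrasing
        response_template: str                # empathetic wording, not just the raw outcome
        escalate_if_unresolved: bool = False  # sensitive intents route to a human

    INTENTS = [
        Intent(
            name="claim_denial",
            training_phrases=["my claim was denied", "why was my treatment rejected"],
            response_template=(
                "I understand how important this is for you. Let me review the "
                "specifics of your claim and explore the next steps together."
            ),
            escalate_if_unresolved=True,
        ),
        Intent(
            name="forgot_password",
            training_phrases=["reset my password", "can't log in"],
            response_template="No problem. I can send you a reset link right away.",
        ),
    ]

    def classify(message: str) -> Intent | None:
        """Naive keyword match; a production system would use a trained classifier."""
        text = message.lower()
        for intent in INTENTS:
            if any(phrase in text for phrase in intent.training_phrases):
                return intent
        return None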

2. Offer partial handoff to human agents

Sometimes, interactions just need a human touch to be truly empathetic. BetterUp uses Forethought’s intents to determine which cases are too sensitive for AI to handle.

If someone contacts customer support and indicates they’re having a mental health crisis, the system immediately escalates the issue to a human agent for careful handling. On the other hand, the system deflects routine inquiries, like scheduling sessions, connecting with a new coach, or reporting platform issues, leaving the agents to focus on more sensitive interactions.

As Zander Grant, Support Operations Director at BetterUp, put it, “We’ve been able to automate and deflect these routine tickets, leaving the agents to focus on interactions that require more personalized attention.”
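
A handoff rule like this can be expressed as a thin routing layer on top of intent classification. The sketch below reuses the hypothetical classify helper from the earlier example and illustrates the pattern; it is not BetterUp’s or Forethought’s actual implementation:

    # Sensitive intents skip the AI entirely; routine ones are deflected.
    SENSITIVE_INTENTS = {"mental_health_crisis", "claim_denial"}

    def route(message: str) -> str:
        intent = classify(message)  # hypothetical classifier from the earlier sketch
        if intent is None or intent.name in SENSITIVE_INTENTS:
            return "escalate_to_human"  # careful, personalized handling
        return "handle_with_ai"         # routine inquiry the AI can deflect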

3. Measure with empathy in mind

If you only measure metrics like the number of tickets closed or agent speed, you’re not prioritizing empathy. Customers want fast resolutions, but they also expect personalized and thoughtful interactions. Tracking deflection rates alongside CSAT scores helps ensure that AI delivers both efficiency and empathy.

BetterUp intentionally balances metrics like CSAT with productivity by regularly asking themselves if their measurement strategy works. Grant explained, “There’s a delicate balance between quality and quantity, and we’re intentional about finding it by regularly reviewing whether our goals reflect our values.”

A high deflection rate paired with a low CSAT score might signal a lack of personalization within your AI system. For example, routine intents like “forgot password” typically just require speed, while sensitive intents like “claim denial” demand more empathy and clear escalation paths to meet customer expectations.
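
Here is a minimal sketch of a per-intent report that surfaces these gaps. The ticket fields (intent, deflected, csat) are assumptions about what your helpdesk exports, not a specific vendor schema:

    from collections import defaultdict

    def empathy_report(tickets: list[dict]) -> dict[str, dict[str, float]]:
        """Per-intent deflection rate and average CSAT from exported tickets."""
        stats = defaultdict(lambda: {"total": 0, "deflected": 0, "csat_sum": 0.0})
        for ticket in tickets:
            s = stats[ticket["intent"]]
            s["total"] += 1
            s["deflected"] += ticket["deflected"]  # True if resolved without a human
            s["csat_sum"] += ticket["csat"]        # e.g., a 1-5 survey score
        return {
            intent: {
                "deflection_rate": s["deflected"] / s["total"],
                "avg_csat": s["csat_sum"] / s["total"],
            }
            for intent, s in stats.items()
        }

An intent like “forgot password” with high deflection and high CSAT is healthy; “claim denial” with high deflection but low CSAT is a sign the intent needs an escalation path rather than more automation.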

4. Keep customers informed

When customers know they’re interacting with AI, understand how their data is used, and feel assured that human support is available, they are more likely to feel confident in the experience. Without transparency, AI interactions can feel impersonal or untrustworthy and undermine your efforts to be empathetic.

Start with clear disclaimers or banners, such as “You are chatting with an AI assistant,” to set expectations and gather consent. Educational resources, like FAQs or web pages that explain how your AI systems work, the benefits they provide, and how they complement human agents, may help customers feel informed.

You can take this a step further with real-time updates during interactions, like “I’m retrieving your claim details,” to help customers feel informed. You may also want to offer a “Speak with an agent now” option to reassure customers they’re not stuck in an automated loop if their issue gets too complicated.
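
In code, these transparency touches are simply explicit messages sent at each step of an agentic action. The session object and its methods below are illustrative stubs, not a real SDK:

    class ConsoleSession:
        """Stand-in session that just prints; a real one would push to the chat UI."""
        def send(self, text: str) -> None:
            print(text)
        def offer(self, action: str) -> None:
            print(f"[button] {action}")

    def look_up_claim(session, claim_summary: str) -> None:
        session.send("You are chatting with an AI assistant.")  # set expectations up front
        session.send("I'm retrieving your claim details...")    # real-time progress update
        session.send(claim_summary)                             # result of the backend lookup
        session.offer("Speak with an agent now")                # escape hatch from the loop

    look_up_claim(ConsoleSession(), "Claim #8841: under review, decision expected Friday.")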

Lastly, feedback is just as important. Actively asking customers about their AI interactions lets you spot areas where they feel unsupported.

5. Let human agents impact training

Human expertise is what makes AI genuinely effective. While agentic AI operates largely independently, guidance from human agents shapes how well the technology performs.

Use your human agents to help identify specific issues and provide actionable feedback, like the examples below (a sketch of this feedback loop follows the list):

  • Flagging misclassified intents: If a customer asks for a “refund on a damaged item,” and the AI incorrectly identifies the intent as “product replacement,” the agent could flag this error. They might add additional training phrases, like “refund for broken item” or “refund for defective product,” to refine the AI’s ability to classify similar queries correctly.
  • Improving tone and language: If the AI responds with, “Your claim was denied due to policy restrictions,” an agent might rewrite this to be more empathetic: “I understand how frustrating this must be. Let me explain why your claim was denied and walk you through your options.” This revised response can then be added to the system’s training to improve future interactions.
  • Updating escalation rules: If agents notice customers frequently express frustration when the AI tries to resolve complex billing issues, they can suggest adjusting workflows to escalate billing-related queries earlier in the process.
  • Stress-testing workflows: Agents can simulate edge cases—like vague or emotionally charged queries—to see how the AI responds. Based on gaps identified in testing, they might recommend adding more diverse examples to training data.
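
As a rough sketch of the first item, here is what a lightweight feedback loop for flagging misclassified intents might look like. The function, fields, and queue are illustrative assumptions, not a specific vendor API:

    training_queue: list[dict] = []

    def flag_misclassification(ticket_id: str, predicted: str, correct: str,
                               new_phrases: list[str]) -> None:
        """Record an agent's correction so the next training run can absorb it."""
        training_queue.append({
            "ticket_id": ticket_id,
            "predicted_intent": predicted,    # what the AI guessed
            "correct_intent": correct,        # what the agent says it should be
            "training_phrases": new_phrases,  # new examples to add for this intent
        })

    flag_misclassification(
        "T-1042",
        predicted="product_replacement",
        correct="refund_damaged_item",
        new_phrases=["refund for broken item", "refund for defective product"],
    )
    # A periodic job would fold training_queue into the intent training data and retrain.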

This shift often requires new skills for human agents, but it is worth it to ensure customers receive the empathetic interactions they deserve. Agents must understand how agentic AI works, its capabilities and limitations, and how to integrate it into customer interactions.

Genuine empathy can only exist with human oversight

Agentic AI can revolutionize customer service but isn’t a set-it-and-forget-it solution. Without human oversight, even the most advanced AI systems risk making decisions that feel cold, impersonal, or outright harmful to customers.

Empathy isn’t just about language; it’s about actions that reflect the context and emotional weight of a customer’s situation. AI alone can’t fully grasp that—it needs humans to guide and refine it. That takes intentional design, constant monitoring, and a willingness to adapt.

This means businesses must invest in systems where humans and AI work together. Human agents aren’t just there to handle escalations; they play a critical role in shaping how AI systems learn, evolve, and respond. Agentic AI is only as empathetic as the humans who train and oversee it. Keeping a close eye on it ensures it acts as a true extension of your team, not a cold, automated barrier.
