
Customers are happiest when they feel understood. They trust humans to deliver that understanding: 70% of consumers believe human customer service agents are more likely to help if they can relate to their emotions. While generative AI has proven capable of using empathetic language, the rise of agentic AI poses a new challenge: the actions AI takes must be just as empathetic as its language.
Agentic AI has the potential to completely transform customer service by dramatically improving efficiency, handling repetitive tasks, and resolving issues at scale. But this efficiency risks losing the human touch that builds trust and retention. Humans need to oversee agentic AI to be sure it acts with empathy, and these practical strategies will show you how.
What's the potential for empathetic AI systems?
Agentic AI systems can plan and execute customer support actions without human intervention, such as adjusting delivery schedules, issuing refunds, and resolving technical issues. This capability transforms support systems into tools that don't just respond to customers but actively address their concerns.
The potential for agentic AI is massive. Gartner predicts that by 2028, at least 15% of day-to-day work decisions will be made autonomously through agentic AI, compared to virtually none today. This shift will redefine the role of the customer service agent from handling routine inquiries to focusing on high-value interactions.
Provided these systems can act with empathy, the real power of agentic AI lies in its scale. If every customer gets an immediate response that anticipates their needs, businesses can scale with far less overhead.
Scaling AI comes with challenges, including keeping data private, complying with regulatory standards, and ensuring ethical transparency. Perhaps most importantly, businesses must prioritize ensuring AI acts with empathy in its decision-making.
What happens when AI isn't empathetic?
Customer service can profoundly impact people's lives. From healthcare claims to travel emergencies, the way support systems respond in critical moments can affect a customer's well-being. When AI lacks empathy, it doesn't just risk retention; it risks failing the people it serves.
According to a survey by Colliers, 61.3% of U.S. adults identified a lack of empathy as their top concern about using AI in customer service. That's because without empathy, AI decisions can feel cold and rigid, even when technically correct.
A healthcare AI system might quickly deny a claim for life-saving surgery because it strictly adheres to policy without considering the emotional weight of the situation. A travel AI system might decline a flight change request during a family emergency because the request doesn't meet fare rules. A utility AI system might automate service shutoffs for missed payments in an area recovering from a natural disaster.
The key to avoiding these traps is proper training of both humans and agentic AI systems. Responses and actions should be grounded in real-time interactions and paired with human oversight, especially in sensitive scenarios.
How to make sure your agentic AI systems are empathetic
Making agentic AI empathetic takes careful planning. While the technology is great at streamlining tasks and resolving issues, it needs to learn the human side of customer interactions. From segmenting intents to providing thoughtful handoffs, these steps can help keep AI effective without losing empathy.
1. Segment AI system responses by customer intent
Using keywords, you can train an agentic AI system to act based on customer intent. This helps the system understand the context behind a message and respond according to the emotional weight of the situation.
Intents define specific goals, like "emergency" or "claim denial." They are trained with diverse phrases to reflect how customers naturally express their needs. A patient whose claim for life-saving treatment was denied wouldn't receive a generic response like "Your claim was denied due to [reason]." Instead, a well-designed intent would provide an empathetic reply: "I understand how important this treatment is for you. Let me review the specifics of your claim and explore the next steps together." If the AI couldn't resolve the issue, the system would escalate to a human agent to ensure the customer feels supported.
To set up and train intents effectively, businesses should follow these best practices (a minimal code sketch follows the list):
- Define clear goals for each intent: Establish what AI needs to achieve for each query type.
- Identify common user phrases: Gather diverse examples of how customers express these intents, including variations in tone and language.
- Use diverse training data: Train the AI to recognize variations in phrasing and context.
- Test and refine continually: Evaluate the AIโs performance regularly to address gaps.
- Review and update intents: Continuously train with new use cases or changes in customer behavior to keep the system effective.
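To make this concrete, here is a minimal Python sketch of intent segmentation. The intent names, training phrases, and the simple word-overlap matcher are illustrative assumptions; a production system would train a classifier on far more examples.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    name: str
    training_phrases: list[str]   # diverse examples of how customers phrase the goal
    response_template: str        # empathetic reply used when the intent matches
    escalate_if_unresolved: bool = False

# Hypothetical intents for illustration only.
INTENTS = [
    Intent(
        name="claim_denial",
        training_phrases=[
            "my claim was denied",
            "why was my claim rejected",
            "denied coverage for my treatment",
        ],
        response_template=(
            "I understand how important this is for you. Let me review "
            "the specifics of your claim and explore the next steps together."
        ),
        escalate_if_unresolved=True,
    ),
    Intent(
        name="forgot_password",
        training_phrases=["reset my password", "forgot my password", "I can't log in"],
        response_template="No problem! I can send a reset link to the email on file.",
    ),
]

def classify(message: str) -> Intent | None:
    """Naive word-overlap matcher; real systems use a trained classifier."""
    words = set(message.lower().split())
    best, best_score = None, 0.0
    for intent in INTENTS:
        for phrase in intent.training_phrases:
            phrase_words = set(phrase.lower().split())
            score = len(words & phrase_words) / len(phrase_words)
            if score > best_score:
                best, best_score = intent, score
    # Below a minimum confidence, return None so the message routes to a human.
    return best if best_score >= 0.5 else None

intent = classify("My claim for surgery was denied. What can I do?")
if intent:
    print(intent.response_template)
```

Note the `escalate_if_unresolved` flag: sensitive intents carry their escalation behavior with them, so empathy is a property of the intent design rather than an afterthought.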
2. Offer partial handoff to human agents
Sometimes, interactions just need a human touch to be truly empathetic. BetterUp uses Forethought's intents to determine which cases are too sensitive for AI to handle.
If someone contacts customer support and indicates they're having a mental health crisis, the system immediately escalates the issue to a human agent for careful handling. On the other hand, the system deflects routine inquiries, like scheduling sessions, connecting with a new coach, or reporting platform issues, leaving the agents to focus on more sensitive interactions.
As Zander Grant, Support Operations Director at BetterUp, put it, "We've been able to automate and deflect these routine tickets, leaving the agents to focus on interactions that require more personalized attention."
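A minimal sketch of that routing logic might look like the following. The tier names and intent labels are hypothetical, not BetterUp's or Forethought's actual configuration.

```python
# Hypothetical sensitivity tiers; a real deployment would define these per business.
SENSITIVE_INTENTS = {"mental_health_crisis", "claim_denial"}
ROUTINE_INTENTS = {"schedule_session", "connect_new_coach", "report_platform_issue"}

def route(intent_name: str) -> str:
    """Decide who owns the conversation based on the detected intent."""
    if intent_name in SENSITIVE_INTENTS:
        return "escalate_to_human"      # immediate handoff with full conversation context
    if intent_name in ROUTINE_INTENTS:
        return "resolve_with_ai"        # AI deflects the ticket end to end
    return "ai_first_human_standby"     # AI attempts resolution, escalates if it stalls

print(route("mental_health_crisis"))    # -> escalate_to_human
print(route("schedule_session"))        # -> resolve_with_ai
```

The middle tier matters: most intents are neither clearly routine nor clearly sensitive, so a partial handoff keeps a human on standby rather than forcing an all-or-nothing choice.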
3. Measure with empathy in mind
If you only measure metrics like the number of tickets closed or agent speed, you're not prioritizing empathy. Customers want fast resolutions, but they also expect personalized and thoughtful interactions. Tracking deflection rates alongside CSAT scores helps ensure that AI delivers both efficiency and empathy.
BetterUp intentionally balances metrics like CSAT with productivity by regularly asking themselves if their measurement strategy works. Grant explained, "There's a delicate balance between quality and quantity, and we're intentional about finding it by regularly reviewing whether our goals reflect our values."
A high deflection rate paired with a low CSAT score might signal a lack of personalization within your AI system. For example, routine intents like "forgot password" typically just require speed, while sensitive intents like "claim denial" demand more empathy and clear escalation paths to meet customer expectations.
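As a rough sketch, you could pair the two metrics per intent and flag combinations that suggest the AI is deflecting tickets it handles poorly. The ticket records, field names, and thresholds below are illustrative assumptions; real data would come from your helpdesk's reporting.

```python
from statistics import mean

# Hypothetical ticket records for illustration.
tickets = [
    {"intent": "forgot_password", "deflected": True, "csat": 5},
    {"intent": "forgot_password", "deflected": True, "csat": 4},
    {"intent": "claim_denial",    "deflected": True, "csat": 2},
    {"intent": "claim_denial",    "deflected": True, "csat": 3},
]

def empathy_report(tickets, csat_floor=3.5, deflection_ceiling=0.8):
    """Flag intents the AI resolves alone (high deflection) but handles poorly (low CSAT)."""
    by_intent: dict[str, list[dict]] = {}
    for t in tickets:
        by_intent.setdefault(t["intent"], []).append(t)
    for intent, group in by_intent.items():
        deflection = sum(t["deflected"] for t in group) / len(group)
        csat = mean(t["csat"] for t in group)
        flagged = deflection > deflection_ceiling and csat < csat_floor
        note = "  <- review: high deflection, low CSAT" if flagged else ""
        print(f"{intent}: deflection {deflection:.0%}, CSAT {csat:.1f}{note}")

empathy_report(tickets)
```

Run against this sample data, "forgot password" passes (fast and well-rated) while "claim denial" is flagged, which is exactly the signal that an intent needs more empathy or an earlier escalation path.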
4. Keep customers informed
When customers know they're interacting with AI, understand how their data is used, and feel assured that human support is available, they are more likely to feel confident in the experience. Without transparency, AI interactions can feel impersonal or untrustworthy and undermine your efforts to be empathetic.
Start with clear disclaimers or banners, such as "You are chatting with an AI assistant," to set expectations and gather consent. Educational resources, like FAQs or web pages that explain how your AI systems work, the benefits they provide, and how they complement human agents, may help customers feel informed.
You can take this a step further with real-time updates during interactions, like "I'm retrieving your claim details," to help customers feel informed. You may also want to offer a "Speak with an agent now" option to reassure customers they're not stuck in an automated loop if their issue gets too complicated.
Lastly, feedback is just as important. Actively asking customers about their AI interactions lets you spot areas where they feel unsupported.
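Pulled together, these transparency touchpoints might look like the sketch below. The message strings and the `lookup_claim` helper are hypothetical placeholders, not a vendor API.

```python
# A minimal sketch of transparency touchpoints in a single chat flow.
def lookup_claim(claim_id: str) -> str:
    # Placeholder for a real backend lookup.
    return f"Claim {claim_id} is under review; a decision is expected within 3 business days."

def handle_message(message: str, first_turn: bool = False) -> list[str]:
    replies = []
    if first_turn:
        # Disclose the AI up front and offer a standing escape hatch.
        replies.append("You are chatting with an AI assistant. "
                       "Type 'agent' at any time to speak with a person.")
    if "agent" in message.lower():
        replies.append("Connecting you with a human agent now.")
        return replies
    replies.append("I'm retrieving your claim details...")   # real-time status update
    replies.append(lookup_claim("12345"))
    replies.append("How did I do? Reply 1-5 so we can keep improving.")  # feedback prompt
    return replies

for reply in handle_message("What's the status of my claim?", first_turn=True):
    print(reply)
```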
5. Let human agents impact training
Human expertise is what makes AI genuinely effective. While agentic AI is largely autonomous, the guidance it receives from human agents shapes how well the technology performs.
Use your human agents to help identify specific issues and provide actionable feedback (a code sketch of this loop follows the list), like:
- Flagging misclassified intents: If a customer asks for a "refund on a damaged item," and the AI incorrectly identifies the intent as "product replacement," the agent could flag this error. They might add additional training phrases, like "refund for broken item" or "refund for defective product," to refine the AI's ability to classify similar queries correctly.
- Improving tone and language: If the AI responds with, "Your claim was denied due to policy restrictions," an agent might rewrite this to be more empathetic: "I understand how frustrating this must be. Let me explain why your claim was denied and walk you through your options." This revised response can then be added to the system's training to improve future interactions.
- Updating escalation rules: If agents notice customers frequently express frustration when the AI tries to resolve complex billing issues, they can suggest adjusting workflows to escalate billing-related queries earlier in the process.
- Stress-testing workflows: Agents can simulate edge cases, like vague or emotionally charged queries, to see how the AI responds. Based on gaps identified in testing, they might recommend adding more diverse examples to training data.
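One way to operationalize the first of these, sketched under the assumption of a simple phrase-based training set: an agent's correction is folded straight back into the data the next retrain consumes. The `AgentFeedback` fields and intent names are hypothetical, not a specific vendor's schema.

```python
from dataclasses import dataclass

# Hypothetical feedback record filed by a human agent.
@dataclass
class AgentFeedback:
    ticket_id: str
    predicted_intent: str          # what the AI classified
    correct_intent: str            # what the agent says it should have been
    suggested_phrases: list[str]   # new training phrases to close the gap

# Training phrases per intent: the data a periodic retrain would consume.
training_phrases: dict[str, set[str]] = {
    "refund_damaged_item": {"refund on a damaged item"},
    "product_replacement": {"replace my broken item"},
}

def apply_feedback(fb: AgentFeedback) -> None:
    """Fold an agent's correction back into the training set for the next retrain."""
    if fb.predicted_intent != fb.correct_intent:
        training_phrases.setdefault(fb.correct_intent, set()).update(fb.suggested_phrases)

apply_feedback(AgentFeedback(
    ticket_id="T-1042",
    predicted_intent="product_replacement",
    correct_intent="refund_damaged_item",
    suggested_phrases=["refund for broken item", "refund for defective product"],
))
print(sorted(training_phrases["refund_damaged_item"]))
```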
This shift often requires new skills for human agents, but it is worth it to ensure customers receive the empathetic interactions they deserve. Agents must understand how agentic AI works, its capabilities and limitations, and how to integrate it into customer interactions.
Genuine empathy can only exist with human oversight
Agentic AI can revolutionize customer service, but it isn't a set-it-and-forget-it solution. Without human oversight, even the most advanced AI systems risk making decisions that feel cold, impersonal, or outright harmful to customers.
Empathy isn't just about language; it's about actions that reflect the context and emotional weight of a customer's situation. AI alone can't fully grasp that; it needs humans to guide and refine it. That takes intentional design, constant monitoring, and a willingness to adapt.
This means businesses must invest in systems where humans and AI work together. Human agents aren't just there to handle escalations; they play a critical role in shaping how AI systems learn, evolve, and respond. Agentic AI is only as empathetic as the humans who train and oversee it. Keeping a close eye on it ensures it acts as a true extension of your team, not a cold, automated barrier.