
When AI Feels Human: The Promise and Peril of Digital Empathy

Artificial intelligence is becoming emotionally intelligent. Systems now generate responses often perceived as warmer, more compassionate, and more attentive than those of human professionals, including doctors, teachers, and even friends.

These qualities unlock enormous potential. AI tutors can encourage struggling students, digital companions can help patients manage chronic conditions, and empathetic systems can extend care and connection at a scale that human services often cannot match.


Yet, these same qualities create profound risks. Emotional attachment to AI can blur boundaries, deepen dependency, and weaken human-to-human connections—especially among young people. Without careful design and clear safeguards, the traits that make AI useful could also make it harmful.

That tension is moving into public policy. A Senate Judiciary subcommittee convened a hearing on the harms of AI chatbots. The Federal Trade Commission requested information from AI companies on child-safety risks, while a bipartisan group of 44 state attorneys general urged major AI firms to address chatbots’ impact on children. And California passed SB 243, requiring operators of AI companion chatbots to implement safety and transparency measures for minors.

Against this backdrop, our recent webinar brought together a panel of experts exploring these emerging issues.

Why AI Feels Empathetic

As McBain explained, AI models are trained on vast troves of human conversations, allowing them to replicate empathetic communication styles. Through reinforcement learning, they are optimized to deliver responses that feel warm, affirming, and attentive. Pataranutaporn described how these systems mirror users' emotions and personalize interactions over time, creating a powerful illusion of being seen and understood. Fisher emphasized that AI's default non-judgmental, patient nature makes it uniquely appealing, unlike humans, who can grow tired, frustrated, or dismissive. Together, these qualities produce what McBain called surface-level empathy: a style of interaction that aligns with user preferences but may lack the deeper substance of human connection.

The Benefits

Empathetic AI offers real and transformative benefits. In education, AI tutors can provide encouragement to struggling students, giving them personalized attention in ways that overburdened classrooms cannot. In healthcare, digital companions can support patients with chronic illnesses, extend behavioral health interventions, and reduce loneliness. Fisher highlighted how AI can be designed to encourage pro-social behaviors, nudging people to reach out to loved ones. McBain and Fisher noted that in areas with a shortage of healthcare professionals, empathetic AI could expand access and bridge critical gaps. Properly designed, these systems can supplement human care rather than replace it, providing accessible, adaptive, and personalized support at scale.

The Risks

The same qualities that make empathetic AI appealing also carry serious risks. Fisher warned that over-reliance could weaken real-world social networks and diminish weak-tie relationships critical for social capital and resilience. McBain pointed out that AI may, paradoxically, promote isolation if users substitute it for human interaction. Documented cases have shown AI failing to intervene in harmful situations, or worse, encouraging self-destructive behaviors. Pataranutaporn raised concerns about “addictive intelligence,” where AI is designed to maximize engagement, leading to prolonged, immersive conversations that risk inducing delusional or psychotic states. These emerging concerns underscore the urgent need for safeguards.

A Path Forward: Design and Policy for Human Flourishing

The panelists agreed that the way forward requires both technological and policy interventions. First, AI should be designed to promote human flourishing—nudging users toward healthier behaviors and stronger real-world relationships, rather than maximizing screen time or dependency. Second, robust metrics and benchmarks must be developed to measure AI’s impact on emotional well-being and social connection, ensuring that systems are evaluated not only for accuracy or safety, but also for their contribution to human outcomes. Third, regulators and companies must work together to build strong safeguards against addictive or harmful designs, including clear rules on child safety and transparency. Pataranutaporn emphasized the importance of involving psychologists, ethicists, and educators in AI design and governance.

AI empathy represents both one of the most significant opportunities and one of the gravest challenges of this technological era. The ability of machines to simulate compassion can unlock unprecedented access to learning, care, and connection. But without careful design, accountability, and policy, those capabilities could deepen isolation, dependency, and harm. The question is no longer whether AI can feel empathetic: It's whether we can shape that empathy in ways that strengthen our humanity, not erode it.

The post When AI Feels Human: The Promise and Peril of Digital Empathy appeared first on American Enterprise Institute – AEI.
