Have you ever received a response from ChatGPT that seems to get you almost too well? A recent preprint review, “AI-Generated Empathy: Opportunities, limits, and future directions,” written by a team including Amit Goldenberg, faculty principal investigator in the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3), suggests that in short interactions AI may actually be better than our fellow humans at making us feel understood and cared for, at least until we discover that we’re talking to a machine. These findings challenge our fundamental assumptions about empathy, emotional support, and what it means to truly connect in an increasingly digital world.
Key Insight: A New Perspective on Understanding Empathy
“Empathy is in the mind of the beholder.” [1]
Earlier psychological research has focused primarily on the empathizer, studying what makes them more or less empathic, their biases, and their capacity for emotional connection. But the rise of AI-powered conversation partners flips the perspective, pivoting from the writer (the empathizer) to the receiver (the person being empathized with). Because we can’t meaningfully ask whether an AI truly ‘cares’ or ‘shares feelings,’ the focus shifts to the recipient’s perception of empathy: whether that person feels heard, cared for, and understood.
Key Insight: The Surprising AI Advantage
“Generally speaking, people find text generated by modern LLMs to be more empathic than text written by humans.” [2]
In comparisons spanning crowdsourced workers, crisis-line supporters, and even medical doctors, AI-generated messages often outperform human-written ones on perceived empathy. Why might that be? AI can consistently produce structured, attentive, and validating language; it doesn’t get tired or stop trying, and its phrasing can be optimized for clarity and warmth. In short, the authors identify an “AI Advantage”: the ability to generate more consistently empathetic responses than humans can.
Key Insight: Belief Beats Content
“However, as soon as people believe (accurately or not) they are interacting with an AI, they downgrade the value of the text—something that we call the ‘AI Penalty’.” [3]
The flip side is stark: label the very same message as AI, and ratings drop. Termed the “AI Penalty” by the authors, it is strongest on the dimensions of “feeling with” and “caring,” precisely where people expect a human’s emotional labor and intention. The penalty also emerges when people suspect AI involvement in a message they otherwise believed was human. Taken together, the AI Advantage and AI Penalty suggest that people’s cognitive understanding of AI capabilities conflicts with their emotional preferences for human connection.
Why This Matters
For business leaders and executives, understanding these insights is critical for informed decision-making about customer experience, employee well-being, and technology implementation. Companies might consider hybrid approaches where AI augments human empathy rather than replacing it, such as providing real-time coaching to customer service representatives or helping employees craft more supportive communications. Perhaps most importantly, this research highlights the need for leaders to understand the psychological complexity of human-AI interactions. As AI becomes more sophisticated at mimicking human emotional intelligence, success might not just depend on technical capabilities and deployment, but on navigating the complicated ways that people perceive, value, and respond to digital communications.
References
[1] Desmond C. Ong et al., “AI-Generated Empathy: Opportunities, limits, and future directions.” PsyArXiv Preprint (September 23, 2025): 4. Preprint DOI: https://doi.org/10.31234/osf.io/8n5jw_v1.
[2] Ong et al., “AI-Generated Empathy”: 5.
[3] Ong et al., “AI-Generated Empathy”: 7.
Meet the Authors

Desmond C. Ong is an Assistant Professor of Psychology at the University of Texas at Austin.

Amit Goldenberg is Assistant Professor of Business Administration at Harvard Business School, and faculty principal investigator in the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3).

Michael Inzlicht is a Professor in the Department of Psychology at the University of Toronto, with a cross-appointment as Professor in the Department of Marketing at the Rotman School of Management.

Anat Perry is an Associate Professor of Psychology at the Hebrew University of Jerusalem.