Picture this: you receive a well-crafted, deeply understanding message about a personal struggle you’ve shared. It acknowledges your emotions, offers thoughtful support, and demonstrates genuine care. Now imagine learning that response came from an AI chatbot, not another human. Would that change how you felt about the interaction?
As AI technology grows more sophisticated, large language models (LLMs) can now produce messages that feel warm, supportive, and even compassionate. Yet the new paper “Comparing the value of perceived human versus AI-generated empathy,” written by a team including Professor Amit Goldenberg and other members of the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3), shows that people consistently rate identical messages as less empathic when they believe they came from an AI, revealing what could be called a human empathy premium.
Key Insight: The Dimensions of Empathy
“[W]hat motivates and influences a preference for human empathy?” [1]
The researchers undertook nine studies involving more than 6,000 participants across multiple countries to rigorously test the perception of empathy. Their framework was built around empathy’s three scientifically recognized dimensions. Cognitive empathy involves understanding another person’s emotions, essentially recognizing and comprehending what someone is feeling without necessarily sharing that emotion. Affective empathy goes deeper, representing the ability to actually share another person’s feelings, experiencing a reflection of their emotional state. Motivational empathy involves both feeling concern for someone and taking active steps to support their well-being. The study methodology was simple yet elegant: participants shared personal emotional experiences and received AI-generated responses that were identical in content, timing, and quality, with one crucial difference. Half the participants were told their response came from another human participant, while the other half were told it was AI-generated.
Key Insight: The Human Empathy Premium
“[T]he models prompted to give a motivational or affective response produced responses that were perceived to be more empathic when presented as human responses.” [2]
The first two sets of studies revealed a consistent pattern: people judged identical responses as less empathic when they believed they were machine-generated. The third study tested whether the individual dimensions of empathy influenced what people judged as valuable. The researchers prompted the AI to generate responses emphasizing the cognitive, affective, or motivational dimensions of empathy. When the AI delivered cognitive empathy, participants rated the responses almost identically whether they thought they came from a human or an AI. But when responses were tuned for affective or motivational empathy, they were rated as more empathic when presented as human-written, suggesting that people resist the idea that AI could truly share their feelings or care about them.
Key Insight: People Will Wait for People
“[P]eople are willing to wait a substantial amount of time to receive a human response.” [3]
When researchers gave participants the choice between an immediate AI response and waiting varying lengths of time for a human one, the results were telling. Many participants chose to wait, expecting humans to understand better, share their feelings, care more, and reduce loneliness. Those who chose AI mainly prioritized speed or were curious about the technology. Some participants even chose to wait just to have a human read their experience, without receiving any response at all, highlighting the fundamental human need to be truly seen and acknowledged by another conscious being.
Why This Matters
For business leaders, these findings highlight a critical distinction. While AI can enhance operational efficiency and provide support, clear boundaries still exist, with direct implications for customer experience, employee engagement, and brand trust. The research suggests that transparency about AI involvement may come at a cost to engagement outcomes and perceived quality. Most importantly, it indicates that investing in human emotional intelligence may become more valuable, not less, even as AI capabilities expand. Leaders should deliberately reserve human touchpoints for escalations, sensitive HR cases, and moments that demand emotional sharing and care. Overall, strategic advantage may lie in recognizing how differently people value responses from AI and from other humans.
References
[1] Matan Rubin et al., “Comparing the value of perceived human versus AI-generated empathy,” Nat Hum Behav (2025): 2. DOI: https://doi.org/10.1038/s41562-025-02247-w
[2] Rubin et al., “Comparing the value of perceived human versus AI-generated empathy”: 7.
[3] Rubin et al., “Comparing the value of perceived human versus AI-generated empathy”: 7.
Meet the Authors

Matan Rubin is a third-year B.A. student studying psychology and theatre studies and is continuing directly to a PhD. He is interested in the different elements that may influence our ability to communicate our emotions effectively and allow us to better understand each other. He is also interested in applying psychological insights to everyday life.

Joanna Z. Li is a research associate in Professor Goldenberg’s lab working on technology and emotion regulation. She is broadly interested in how player dynamics systems influence inter/intrapersonal processes in online games and VR. She is passionate about the potential of online spaces to democratize access to experiences.

Federico Zimmerman is a Postdoctoral Fellow at the Digital Emotions Lab within the Digital Data Design Institute at Harvard (D^3) at Harvard Business School. He is a computational social scientist interested in the psychological processes associated with social interactions. During his doctoral studies at the Universidad de Buenos Aires in Argentina, he used a combination of experimental and computational methods to investigate the psychological mechanisms underlying affective polarization and political segregation.

Desmond Ong is a cognitive scientist interested in how people (and computers) reason about other people: how they think and what they feel. He is an Assistant Professor of Psychology at the University of Texas at Austin and is associated with the interdepartmental Natural Language Processing (NLP) and Computational Linguistics group at UT.

Amit Goldenberg is an assistant professor in the Negotiation, Organizations & Markets unit at Harvard Business School, an affiliate of Harvard’s Department of Psychology, and a faculty principal investigator in D^3’s Digital Emotions Lab. His research focuses on what makes people emotional in social and group contexts, and how such emotions can be changed when they are unhelpful or undesired. He is particularly interested in how technology is used for both emotion detection and regulation.

Anat Perry completed her PhD at the Hebrew University under the supervision of Prof. Shlomo Bentin, focusing on brain mechanisms that enable our understanding of others. During her postdoctoral research, she worked with Prof. Simone Shamay-Tsoory at Haifa University, and later with Prof. Robert Knight at the Helen Wills Neuroscience Institute at the University of California, Berkeley. She is currently an associate professor in the Psychology Department at the Hebrew University of Jerusalem and the director of the Social Cognitive Neuroscience Lab.