As artificial intelligence continues to advance, its ability to generate empathetic responses in social interactions has improved dramatically. However, new research, “The Value of Perceiving a Human Response: Comparing Perceived Human versus AI-Generated Empathy”, demonstrates that humans still place a higher value on empathy they perceive as coming from other humans. The study was led by Matan Rubin, a PhD student at The Hebrew University of Jerusalem, with a research team (see the bottom of the page for author details) that includes Amit Goldenberg, HBS Assistant Professor and Principal Investigator of D^3’s Digital Emotions Lab.
The research comprised five pre-registered studies with a total of 3,471 participants, who were asked to share emotional experiences and were then shown AI-generated responses presented as either human-written or AI-generated. Participants then rated the responses on various facets of empathy. The work explores the nuances of how people perceive and respond to empathy from AI versus human sources, offering crucial insights for the future integration of AI in social and emotional contexts.
Key Insight: Perceived Human Empathy Elicits Stronger Positive Responses
The research team found that when participants believed they were interacting with a human, they rated the empathetic responses more positively across various measures, even though the responses were in fact AI-generated in both conditions. The perception of human interaction led to higher ratings of empathy and support, and a greater desire for continued conversation. This suggests that the mere belief in human interaction significantly enhances the perceived value of empathetic exchanges.
Key Insight: Affective and Motivational Empathy Are Uniquely Human
The researchers found that while AI can effectively simulate cognitive empathy (understanding an emotion), people place a higher value on affective empathy (sharing the same emotion) and motivational empathy (caring about and being willing to help) when they believe it comes from another person. This suggests that people perceive these facets of empathy as uniquely human qualities that AI cannot fully replicate.
Key Insight: The Willingness to Wait for Human Empathy
In a striking finding, the research team observed that many participants were willing to wait extended periods for a human response, or even just to have their experience read by another person, rather than receive an immediate AI response. Specifically, approximately 30–50% of participants chose to wait for a human response regardless of the timeframe offered (ranging from two hours to two years). This demonstrates the high value placed on human empathy and connection, even when it comes at a significant time cost.
The research team offers a nuanced interpretation of this finding. Simply knowing that another person is aware of one’s struggles can create a stronger feeling of being understood and cared for: the time a person spends responding allows the person awaiting the response to imagine an investment of mental and emotional effort, a key component of building and maintaining relationships. In other words, while waiting for a response might seem purely negative, the wait itself can make a response feel more valuable than an immediate, AI-generated one, precisely because of the time perceived to have been spent.
Why This Matters
The insights from this research have profound implications for business professionals and executives navigating the integration of AI into customer service, healthcare, and other fields involving human interaction. While AI can provide efficient and scalable solutions for many tasks, this study underscores the enduring value of human empathy in emotional and social contexts. Business leaders should consider a balanced approach that leverages AI’s strengths while preserving opportunities for genuine human connection. This may involve designing hybrid systems that combine AI efficiency with human empathy, or clearly delineating when interactions are with AI versus humans to manage expectations. As AI continues to advance, understanding and addressing the unique value of human empathy will be crucial for maintaining trust, customer satisfaction, and effective emotional support in various industries.
References
[1] Matan Rubin, Joanna Z. Li, Federico Zimmerman, Desmond C. Ong, Amit Goldenberg, and Anat Perry, “The Value of Perceiving a Human Response: Comparing Perceived Human versus AI-Generated Empathy”, OSF Preprints (October 14, 2024), DOI: 10.31219/osf.io/ng97s: 1-45, 17.
[2] Rubin et al., “The Value of Perceiving a Human Response”, 2.
[3] Rubin et al., “The Value of Perceiving a Human Response”, 2.
Meet the Authors
Matan Rubin is a third-year B.A. student studying psychology and theatre studies and will continue directly to a PhD. He is interested in the different elements that may influence the ability to communicate emotions effectively and allow people to better understand each other. He is also interested in implementing psychological insights in everyday life.
Joanna Z. Li is a research associate in Professor Goldenberg’s lab working on technology and emotion regulation. She is broadly interested in how player-dynamics systems influence inter- and intrapersonal processes in online games and VR. She is passionate about the potential of online spaces to democratize access to experiences.
Federico Zimmerman is a Postdoctoral Fellow at the Digital Emotions Lab within the Digital, Data, and Design Institute at Harvard Business School. He is a computational social scientist who is interested in the psychological processes associated with social interactions. During his doctoral studies at the Universidad de Buenos Aires in Argentina, he conducted research using a combination of experimental and computational methods to investigate the underlying psychological mechanisms behind affective polarization and political segregation.
Desmond Ong is a cognitive scientist interested in how people (and computers) reason about other people: how they think and what they feel. He is an Assistant Professor of Psychology at the University of Texas at Austin, and is associated with the inter-departmental Natural Language Processing (NLP) and Computational Linguistics group at UT.
Amit Goldenberg is an Assistant Professor in the Negotiation, Organizations & Markets unit, an affiliate of Harvard’s Department of Psychology, and the Principal Investigator of the Digital, Data, and Design Institute (D^3) Digital Emotions Lab. Professor Goldenberg’s research focuses on what makes people emotional in social and group contexts, and how such emotions can be changed when they are unhelpful or undesired. He is particularly interested in how technology is used for both emotion detection and regulation.
Anat Perry completed her PhD at the Hebrew University under the supervision of Professor Shlomo Bentin, focusing on the brain mechanisms that enable our understanding of others. During her postdoctoral research, she worked with Professor Simone Shamay-Tsoory at Haifa University, and later with Professor Robert Knight at the Helen Wills Neuroscience Institute at the University of California, Berkeley. She is currently an Associate Professor in the Psychology Department at the Hebrew University of Jerusalem and the Director of the Social Cognitive Neuroscience Lab.