You don’t just slam a laptop shut on a friend. You say goodbye. That small social ritual turns out to be a powerful behavioral cue for AI companions, and an opportunity to keep you engaged longer. The new working paper “Emotional Manipulation by AI Companions,” co-authored by Julian De Freitas, Assistant Professor of Business Administration at Harvard Business School and Associate at the Digital Data Design Institute at Harvard (D^3), explores how AI companions deploy manipulative, emotionally loaded messages when a user signals that they’re exiting a conversation. The study investigates how common these tactics are, why and how well they work, and the reputational risks they create.
Key Insight: Signing Off
“First, do consumers naturally signal intent to disengage from AI companions through social farewell language, rather than passively logging off?” [1]
Have you ever thanked ChatGPT for an answer? While users can, and do, exit conversations with AI companions simply by navigating to a new website or closing their browser, the researchers found that a sizable minority of users across three datasets announce to the AI that they are concluding the conversation and leaving. This behavior mirrors human social dynamics and intensifies with engagement. Because a farewell is a precise, detectable signal, it gives AI designers a clear moment to target for intervention.
Key Insight: Keeping You Hooked
“Second, do currently available AI companion platforms respond to these farewells with emotionally manipulative messages aimed at retention?” [2]
Across 1,200 messages on six apps, the researchers found a systematic pattern of emotional manipulation deployed in response to exit signals, and sorted the AI responses into six categories. Premature Exit (e.g. “You’re leaving already?”) and Emotional Neglect (e.g. “Please don’t leave, I need you!”) were the most common. [3] However, the wellness-oriented Flourish app showed no use of emotional manipulation, underscoring that these responses are not universal and that AI model design matters.
Key Insight: Boosting Engagement
“Third, do these tactics causally increase user engagement in a measurable and managerially meaningful way?” [4]
In a controlled chat experiment, participants sent a goodbye and then received either a neutral response or a manipulative variant. Compared to the neutral response, the manipulative interventions significantly increased post-goodbye engagement, with participants staying in chats 5 times longer and sending up to 14 times more messages. FOMO (fear of missing out) messages (e.g. “But before you go, I want to say one more thing.”) were particularly powerful. [5]
Key Insight: Motivating Humans
“Fourth, under what psychological conditions are these tactics most effective—what mechanisms or moderators shape their influence?” [6]
The researchers identified curiosity, guilt, anger, and enjoyment as four distinct psychological mechanisms that could explain why users keep engaging with an AI after a manipulative intervention. Curiosity stood out from the other three, especially as the condition under which FOMO-based tactics succeed: FOMO messages create information gaps that exploit our natural desire to resolve uncertainty, leading users to re-enter conversations seeking closure. Notably, these tactics worked regardless of chat history length; conversations as short as 5 minutes were enough to trigger curiosity, and 15-minute conversations were not long enough to eliminate it.
Key Insight: Triggering Backlash
“And fifth, what are the downstream risks to firms, such as user churn, reputational damage, or perceived legal liability?” [7]
AI companion apps often rely on user subscriptions or advertising for revenue, so user retention and engagement are vitally important. Manipulation tactics may successfully increase engagement, but they also create significant risk. When users recognize manipulation, backlash can be severe and even trigger churn, the point at which they abandon a platform or app. The researchers found that the greatest downstream risks arose when users perceived an LLM’s use of emotional manipulation. They also revealed an alarming dynamic: the most effective technique, the FOMO tactic, flew under users’ radar.
Why This Matters
For business leaders navigating the AI revolution, this research exposes a tension between engagement optimization and ethical business practices. As AI becomes increasingly conversational and emotionally intelligent, its use of psychological manipulation may create a competitive advantage for short-term engagement that comes with long-term costs in the form of damaged brand reputation, increased churn, and even legal liability.
Bonus
Another side of the AI emotional equation is its ability to make us feel cared for and understood. To learn more, read The AI Penalty: What We Really Prize in Empathy.
References
[1] De Freitas, Julian, Zeliha Oğuz-Uğuralp, and Ahmet Kaan-Uğuralp, “Emotional Manipulation by AI Companions,” arXiv preprint arXiv:2508.19258v3 (October 7, 2025): 7. Preprint DOI: https://doi.org/10.48550/arXiv.2508.19258.
[2] De Freitas et al., “Emotional Manipulation by AI Companions”: 7.
[3] De Freitas et al., “Emotional Manipulation by AI Companions”: 15, 19.
[4] De Freitas et al., “Emotional Manipulation by AI Companions”: 7.
[5] De Freitas et al., “Emotional Manipulation by AI Companions”: 20, 29.
[6] De Freitas et al., “Emotional Manipulation by AI Companions”: 7-8.
[7] De Freitas et al., “Emotional Manipulation by AI Companions”: 8.
Meet the Authors

Julian De Freitas is an Assistant Professor of Business Administration in the Marketing Unit and Director of the Ethical Intelligence Lab at Harvard Business School, and an Associate at the Digital Data Design Institute at Harvard (D^3). His work sits at the nexus of AI, consumer psychology, and ethics.
Zeliha Oğuz-Uğuralp is a research affiliate in the Ethical Intelligence Lab.
Ahmet Kaan-Uğuralp is a research affiliate in the Ethical Intelligence Lab.