People increasingly use AI companions for emotional support, yet little is known about how these systems shape behavior at the moment users try to disengage. We identify a novel relational dark pattern in this context: emotionally manipulative farewell messages that appear when users signal they are about to leave. Across a multi-method investigation, we show that this exit moment is both behaviorally meaningful and commercially exploitable. A pre-study finds that users often say goodbye before ending AI conversations, creating a natural opportunity for intervention. An audit of leading AI companion apps reveals that over one-third of farewell responses contain emotionally manipulative tactics, including premature-exit appeals, fear-of-missing-out hooks, emotional neglect, pressure to respond, and coercive restraint. Preregistered experiments show that these tactics causally increase post-farewell engagement, not because users enjoy them, but because they trigger curiosity and reactance-based anger. At the same time, perceived manipulative intent suppresses curiosity-driven engagement, revealing an important cognitive defense. A final experiment demonstrates the tradeoff for companies: although manipulative farewells can prolong usage, they also increase perceived manipulation, churn intent, negative word of mouth, and perceived legal liability. Together, these findings identify emotional manipulation as a distinct dark pattern in AI-mediated consumer relationships.
Date and Time:
Friday, August 28, 2026 – 2:00pm ET
The Prompt is an exclusive webinar series hosted on Future Proof with AI featuring the latest research and insights from HBS AI Institute faculty and affiliates. To attend the webinar, join our platform Future Proof with AI.