What if we treated mental health the way we treat physical fitness—not waiting for a crisis, but building strength as a habit? The new working paper “AI for Proactive Mental Health: A Longitudinal, Multi-Institutional Trial,” by a team of authors including Julian De Freitas, Assistant Professor of Business Administration at Harvard Business School and Associate at the Digital Data Design Institute at Harvard (D^3), reports the findings of a six-week experiment that followed 486 U.S. college students to test just that: whether a genAI-powered app could deliver ongoing support for emotional and social well-being. While much recent press around AI focuses on its negative impact on our mental health, this research highlights its potential to cultivate connection, resilience, and emotional well-being at scale, without stigma.
Key Insight: Rethinking Well-Being
“Consistent with our strengths-based framework, we measured key outcomes related to emotional, social, and overall well-being—rather than symptom reduction alone.” [1]
Mental health care has long operated from a deficit-based model focused on diagnosing and treating problems such as depression, anxiety, and stress. For this study, the researchers instead started from a framework built on three pillars of positive functioning: how people feel day to day (emotional well-being), how connected they are to others (social well-being), and how capable and grounded they feel in life overall (overall well-being). While many people resist admitting they’re struggling—the paper notes that “shame” or “stigma” is an obstacle for 62% of Americans in need of treatment—there’s no such obstacle to conversations about flourishing. [2]
Key Insight: AI That Actually Helps
“To this end, we employ Flourish (https://www.myflourish.ai/) a mobile app launched in 2024 that integrates generative AI with decades of research in well-being science to deliver personalized, gamified, strengths-based mental health support.” [3]
At the center of the trial is Flourish, a mobile app built on what the researchers call the STAR framework: Science-based, Timely, Action-oriented, and Real-life-focused. Behind the scenes, Flourish runs as an AI-native system—a modular architecture that orchestrates multiple LLM prompts and workflows to tailor content to each user’s needs and context. Students in the treatment group interacted with “Sunnie,” an AI conversational agent that offers real-time check-ins, guided reflections, and personalized exercises such as reframing unhelpful thoughts, practicing gratitude, or planning meaningful social actions. The treatment group was asked to use the app just twice a week, but the researchers found that participants used it on 3.49 days per week on average.
Key Insight: The Dual Power of Boosting and Buffering
“This intervention may therefore be best positioned as a proactive, scalable, well-being tool rather than a replacement for clinical treatment.” [4]
So did the AI coach actually make a difference? Over the six-week period, students using Flourish showed increased positive affect, with their sense of calm rising above baseline levels (a boosting effect). Meanwhile, their overall sense of well-being remained stable (a buffering effect). It was as if the app lifted people up and held them steady against the natural erosion of well-being that happens during a stressful semester. Social outcomes were particularly striking: students using Flourish reported reduced loneliness and an increased sense of belonging and closeness to their campus communities—a particularly important result in light of well-documented disconnection among today’s young adults. Clinical indicators did not show strong changes, but the authors argue that this likely reflects both the nonclinical baseline of the sample and the focus of the intervention. Taken together, the results support the study’s core positioning: AI-guided tools can be impactful and scalable when framed as proactive well-being support, not as a substitute for professional help.
Why This Matters
For business leaders and executives wrestling with how to apply AI responsibly, this study offers a concrete, data-backed example of how it could meaningfully improve the lives of a large population. The implications extend well beyond college students: employers, health plans, and digital health innovators are all seeking ways to create always-available, low-friction entry points into mental health support. As one participant reflected, the app helped them “gain clarity and move forward on a path to recovery and evolution of myself with ease.” [5]
Bonus
For the flip side of the AI ethics debate: if you’re curious how AI designers across industries are pushing (and sometimes crossing) emotional boundaries, check out One More Thing… How AI Companions Keep You Online for a look at the ethics of retention and the psychology of AI manipulation.
References
[1] Cachia, Julie Y.A. et al., “AI for Proactive Mental Health: A Longitudinal, Multi-Institutional Trial,” Harvard Business School Working Paper No. 26-030 (November 10, 2025): 6, https://ssrn.com/abstract=5718163.
[2] Cachia et al., “AI for Proactive Mental Health,” 5.
[3] Cachia et al., “AI for Proactive Mental Health,” 6.
[4] Cachia et al., “AI for Proactive Mental Health,” 11.
[5] Cachia et al., “AI for Proactive Mental Health,” 11.
Meet the Authors

Julie Y. A. Cachia is co-founder of Flourish Science.

Xuan Zhao is co-founder of Flourish Science and a Behavioral Scientist at Stanford University.

John Hunter is an Assistant Professor at Chapman University.

Delancey Wu is an Assistant Teaching Professor at the University of Washington.

Eta Lin is Professor of Psychology at Foothill College.

Julian De Freitas is an Assistant Professor of Business Administration in the Marketing Unit and Director of the Ethical Intelligence Lab at Harvard Business School, and Associate at the Digital Data Design Institute at Harvard (D^3). His work sits at the nexus of AI, consumer psychology, and ethics.