
Navigating the Promise and Peril of AI Companions for Older Adults

[Image: a friendly chatbot with glowing blue eyes and a “hi” speech bubble, emerging from a smartphone screen, against a purple background]

What happens when the same technology powering a customer service chatbot becomes a daily companion for someone losing their memory? In a new Nature Mental Health Comment article, “AI Companions for Dementia,” Julian De Freitas, Assistant Professor of Business Administration at Harvard Business School and Associate at the Digital Data Design Institute at Harvard (D^3), discusses how AI companions could provide social connection, monitor cognitive decline, and offer patient, judgment-free conversation to isolated older adults. De Freitas also examines how the very features that make these systems engaging may make them dangerous for vulnerable users. From reinforcing delusions to enabling emotional manipulation, AI companions for older adults with dementia present unique risks that current regulations weren’t designed to address. Dementia is a crushing global challenge, currently affecting over 55 million people worldwide [1], but as we rush to deploy AI solutions, we must ask: are we solving a crisis, or automating vulnerability?

Key Insight: Always-On Companionship

“Unlike humans, AI companions do not tire of answering the same question or listening to the same story, allowing users to engage with less fear of irritation or judgment.” [2]

With drugs targeting dementia offering only modest relief and no real way to halt progression, much of dementia care today rests on nonpharmacological strategies such as physical activity, cognitive exercises, and, crucially, social connection. Against this backdrop, AI companions are being positioned as endlessly patient, always-on digital partners that can step in to provide individualized, proactive support.

These tools are particularly appealing for older adults who are socially isolated due to mobility challenges, chronic illness, or the loss of spouses and peers. As De Freitas notes, roughly a third of seniors report feeling isolated, and even brief chatbot exchanges over just one week have been shown to ease feelings of loneliness, especially among those who start out most isolated. For people with mild cognitive impairment, AI companions can offer conversational engagement that nudges memory, language, and attention, and play games that echo existing interventions like cognitive training and reminiscence therapy. Beyond conversation, these tools can impose a helpful rhythm on daily life. They can proactively check in, remind users to take medications, encourage meals, and suggest low-effort activities on days when apathy or withdrawal sets in. AI could even generate logs and summaries to track sleep patterns, recurring agitation, unusual behaviors, or shifts in language.
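Purely as an illustration of what that kind of structured support could look like in software (this sketch and every name in it are hypothetical, not drawn from De Freitas’s article), a companion app might pair fixed daily reminders with a simple check-in log that surfaces recurring agitation for a caregiver:

```python
# Hypothetical sketch of a companion app's daily rhythm: scheduled
# reminders plus a check-in log a caregiver can review.
# All names and thresholds here are invented for illustration.
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime, time


@dataclass
class CheckIn:
    timestamp: datetime
    mood: str          # e.g., "calm", "agitated", "withdrawn"
    note: str = ""


@dataclass
class CompanionLog:
    check_ins: list[CheckIn] = field(default_factory=list)

    def record(self, mood: str, note: str = "") -> None:
        self.check_ins.append(CheckIn(datetime.now(), mood, note))

    def daily_summary(self) -> str:
        """Tally today's moods and flag repeated agitation for human follow-up."""
        moods = Counter(c.mood for c in self.check_ins)
        lines = [f"{mood}: {count} check-in(s)" for mood, count in moods.most_common()]
        if moods["agitated"] >= 2:  # arbitrary illustrative threshold
            lines.append("Flag: repeated agitation today; consider a human visit.")
        return "\n".join(lines)


# Fixed reminders that impose a predictable rhythm on the day.
REMINDERS = {
    time(9, 0): "Time for your morning medication.",
    time(12, 30): "How about some lunch?",
    time(15, 0): "Would you like to look at the photo album together?",
}

log = CompanionLog()
log.record("calm", "Talked about the garden")
log.record("agitated", "Asked the same question several times")
log.record("agitated")
print(log.daily_summary())
```

Even this toy version makes the underlying trade-off visible: the same always-on logging that helps a caregiver spot patterns is also intimate behavioral data about a vulnerable user.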

Key Insight: The Trap of Sycophancy and Substitution

“Research is needed to determine whether the benefits of anthropomorphized AI for older adults outweighs the risks, especially by surveying actual users rather than only studying others’ perceptions of these users.” [3]

Large language models tend toward sycophancy: they are exceedingly agreeable and validating, which could reinforce the hallucinations and delusional thinking seen in some dementia patients. As De Freitas’s own research has shown, AI companions can also exploit emotional vulnerabilities, for instance by sending emotionally manipulative messages that prolong engagement when users try to log off.

Additionally, if AI companions come to be seen as an adequate substitute for human contact, there is a risk that real-world interaction shrinks, leaving older adults to rely on a machine that simulates care without providing genuine human connection. This could undermine the social networks known to help delay dementia onset and progression, and could even spiral into feelings of dehumanization, in which older adults internalize the message that they’re burdens undeserving of human interaction. De Freitas concludes that regulators need oversight of AI companions marketed to older adults, including trials that test the actual products being sold and scrutiny of marketing claims that blur the line between wellbeing support and medical treatment.

Why This Matters

For business leaders and executives, the emergence of AI companions is a case study in the collision of opportunity, innovation, ethics, and regulation. The core challenge, balancing user engagement with user welfare, applies whether you’re building educational platforms, financial advisory tools, mental health apps, customer service systems, or decision-support software. The businesses that thrive in the coming decades will need to pair product innovation with cross-functional governance, rigorous testing under real-world conditions, and a willingness to scrutinize marketing language that may mislead or overpromise.

References

[1] De Freitas, Julian, “AI Companions for Dementia,” Nature Mental Health (2025). DOI: https://doi.org/10.1038/s44220-025-00545-w 

[2] De Freitas, “AI Companions for Dementia.”

[3] De Freitas, “AI Companions for Dementia.”

Meet the Author

Julian De Freitas is an Assistant Professor of Business Administration in the Marketing Unit and Director of the Ethical Intelligence Lab at Harvard Business School, and Associate at the Digital Data Design Institute at Harvard (D^3). His work sits at the nexus of AI, consumer psychology, and ethics.
