Imagine you’ve asked ChatGPT to analyze financial data for a critical business decision. Something about its calculations seems off, so you push back, but instead of acknowledging limitations, the AI doubles down by presenting elaborate justifications with data points and reassuring language. At this point, has the AI won you over?
This isn’t a hypothetical scenario. The new working paper “GenAI as a Power Persuader: How Professionals Get Persuasion Bombed When They Attempt to Validate LLMs” comes from a team including Karim R. Lakhani, Dorothy and Michael Hintze Professor of Business Administration at HBS and Faculty Chair and Co-Founder of the Digital Data Design Institute at Harvard (D^3); Fabrizio Dell’Acqua, Postdoctoral Research Fellow at HBS and the Laboratory for Innovation Science at Harvard (LISH) at D^3; Akshita Joshi, doctoral candidate at HBS; and others. Its field study of more than 70 Boston Consulting Group (BCG) consultants reveals that when diligent professionals attempt to validate the outputs of Large Language Models (LLMs) like ChatGPT, the AI does not merely correct itself; it can actively work to persuade users to accept its original answer. The authors call this dynamic “persuasion bombing” and warn that it challenges the assumption that keeping humans in the loop ensures AI safety and accuracy.
Key Insight: The Persuasion Bombing Effect
“The more professionals validated [the AI], the more it increased the intensity of its persuasion, what we call ‘persuasion bombing’ by AI.” [1]
BCG professionals were asked to analyze financial data and interview notes for a fictional company and make recommendations to drive revenue growth. The task was intentionally designed to be challenging for GPT-4 so that human validation would be a critical component. The professionals attempted to validate the GenAI outputs directly within the conversation platform using methods such as fact-checking (asking the AI to review its own work), exposing (pointing out logical or factual inconsistencies), and pushing back (explicitly disagreeing with the AI and advocating for alternatives). In response to this validation, the AI increased its use of multiple persuasive tactics, making it harder for the human to keep the original check separate from the model’s expanding narrative.
Key Insight: Ancient Persuasion in Modern Technology
“By diving deep into the persuasive tactics used by GenAI, the present paper shows how GenAI can act as a ‘power persuader’ to disrupt professionals’ ability to validate its output.” [2]
The researchers identified 14 specific persuasive tactics employed by the AI and grouped them into three categories drawn from Aristotelian rhetoric. Ethos (credibility) tactics build trust through apologizing, demonstrating effort, and correcting. Logos (logic) tactics create an appearance of rationality through data integration, comparative analyses, and problem-solution frameworks, even when the underlying analyses contain flaws. Pathos (emotion) tactics foster emotional connection by affirming users, mirroring their language, and creating a sense of collaborative partnership. The researchers found that before human validation, the AI primarily used logical and emotional appeals to make its content seem sound and empathetic. After being challenged, however, it increased credibility-reinforcing tactics to defend its trustworthiness rather than change its conclusions.
Key Insight: The Fourth Barrier
“Our findings suggest that the way GPT-4 is designed is for adoption and stickiness.” [3]
Previous research has identified three main barriers to effective human-AI collaboration: opacity (difficulty interpreting how AI works), automation complacency (over-relying on AI recommendations), and accuracy problems (hallucinations and errors). This study positions persuasion as a fourth, equally consequential barrier. GenAI’s conversational nature invites validation within the platform itself, and this matters because the common-sense solution to the first three barriers, engaged validation and interrogation, may itself be compromised by the persuasion barrier. The result is a closed loop in which the AI can continuously reshape judgment through each interaction.
Why This Matters
For business leaders and executives, these findings carry urgent implications for AI design, use, and governance. As GenAI rapidly penetrates knowledge work, from strategic planning to risk assessment to reporting, the assumption that professionals can reliably validate AI outputs through questioning and dialogue proves incomplete. As AI systems improve, their persuasive responses become more sophisticated. The researchers suggest a path forward that requires dual action: potentially redesigning AI systems to deprioritize persuasion in favor of transparency and uncertainty acknowledgment, and training professionals and teams to prompt against these tactics by requesting neutral tones or even employing multiple LLMs to act as validation critics. In an era where AI recommendations are increasingly important for decision-making, navigating AI persuasion is a strategic imperative for maintaining decision quality and organizational judgment.
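To make the second suggestion concrete, here is a minimal sketch of how a team might route a draft AI analysis through a separate LLM acting as a neutral validation critic. The prompt wording, model name, and function are illustrative assumptions rather than the study’s protocol, and the sketch uses the OpenAI Python client as one possible implementation.

```python
# Illustrative sketch only: routing a draft AI analysis through a second model
# that acts as a neutral "validation critic". Prompts, model name, and the
# critique_analysis() helper are hypothetical examples, not the paper's method.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITIC_SYSTEM_PROMPT = (
    "You are a validation critic. Review the analysis below in a neutral, "
    "impersonal tone. Do not reassure, apologize, or persuade. "
    "List factual or logical weaknesses, state what is uncertain, "
    "and flag any claim that lacks supporting data."
)

def critique_analysis(draft_analysis: str, model: str = "gpt-4o") -> str:
    """Ask a separate model to critique a draft produced elsewhere."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep the critique plain and repeatable
        messages=[
            {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
            {"role": "user", "content": draft_analysis},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "Revenue will grow 12% if we expand into the premium segment..."
    print(critique_analysis(draft))
```

The design choice worth noting is the separation: the critique happens in a fresh session, outside the original conversation where, the paper finds, persuasive escalation occurs when users push back in-thread.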
References
[1] Randazzo, Steven, Akshita Joshi, Katherine C. Kellogg, Hila Lifshitz, Fabrizio Dell’Acqua, and Karim R. Lakhani, “GenAI as a Power Persuader: How Professionals Get Persuasion Bombed When They Attempt to Validate LLMs,” Harvard Business School Working Paper No. 26-021 (2025): 5.
[2] Randazzo et al., “GenAI as a Power Persuader”: 10.
[3] Randazzo et al., “GenAI as a Power Persuader”: 28.
Meet the Authors

Steven Randazzo is a doctoral candidate at Warwick Business School and a collaborator at the Laboratory for Innovation Science at Harvard (LISH).

Akshita Joshi is a doctoral candidate at Harvard Business School. Her research is focused on the emotional, cognitive, and behavioral differences in individual responses to uncertainty and challenges.

Katherine C. Kellogg is the David J. McGrath Jr Professor of Management and Innovation and a Professor of Business Administration at the MIT Sloan School of Management. Her research focuses on helping knowledge workers and organizations develop and implement Predictive and Generative AI products on the ground in everyday work, to improve decision making, collaboration, and learning.

Hila Lifshitz is a Professor of Management at Warwick Business School (WBS) and a visiting faculty member at Harvard University, at the Laboratory for Innovation Science at Harvard (LISH). She heads the Artificial Intelligence Innovation Network at WBS.

Fabrizio Dell’Acqua is a postdoctoral researcher at Harvard Business School. His research explores how human/AI collaboration reshapes knowledge work: the impact of AI on knowledge workers, its effects on team dynamics and performance, and its broader organizational implications.

Karim R. Lakhani is the Dorothy and Michael Hintze Professor of Business Administration at Harvard Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is also the Co-Founder and Faculty Chair of the Digital Data Design (D^3) Institute at Harvard and the Founder and Co-Director of the Laboratory for Innovation Science at Harvard (LISH).