The New Influence War: How AI Could Hack Democracy

What the rise of AI swarms reveals about the future of influence, information, and democratic resilience.

As we move into the era of agentic AI, what kind of influence will this emerging technology have on democracy and misinformation? In the new Science paper “How Malicious AI Swarms Can Threaten Democracy,” Amit Goldenberg, Assistant Professor of Business Administration at Harvard Business School and Faculty Principal Investigator of the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3), and an international, multidisciplinary group of co-authors argue that we are entering a phase in which “malicious AI swarms” could use multi-agent systems to infiltrate communities, mimic human social behavior, and iteratively refine persuasion tactics in real time. By escalating one-off misinformation into persistent, adaptive manipulation, these systems threaten the information ecosystem that democratic societies depend on. Goldenberg and his co-authors also outline technical, economic, and institutional measures that could meaningfully defend against this new danger.

Key Insight: AI Swarms Operate Like Digital Societies

“Enabled by these capabilities, a disruptive threat is emerging: swarms of collaborative, malicious AI agents.” [1]

Unlike earlier botnets, which relied on centralized control, rigid scripts, and human labor, AI swarms combine LLM reasoning with multi-agent architectures to function more like adaptive digital societies. The authors define malicious AI swarms as systems of persistent agents that coordinate toward shared objectives, adapt in real time to engagement and platform cues, and operate across platforms with minimal human oversight. Five capabilities make these systems especially potent. (1) Swarms replace centralized command with fluid coordination, allowing thousands of AI personas to adapt locally while periodically synchronizing narratives. (2) They can map social networks to identify and infiltrate vulnerable communities with tailored appeals. (3) Human-level linguistic mimicry and irregular behavior patterns help them evade detection. (4) Continuous, automated A/B testing enables rapid optimization of persuasive content. (5) Finally, their always-on persistence allows influence to accumulate gradually, embedding itself within communities and subtly reshaping norms, language, and identity over time. As the article notes, recent elections in Taiwan and India already saw a proliferation of AI-generated propaganda and synthetic media outlets: the threat is not hypothetical, and it is poised to expand.
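
To see why capability (4) is so potent, consider the optimization loop behind ordinary marketing A/B tests, which swarms can run continuously and without human labor. The sketch below is a generic, hypothetical illustration (not taken from the paper): an epsilon-greedy bandit that shifts posting toward whichever message variant draws the most engagement. All variant names, engagement rates, and parameters are invented.

```python
# Generic epsilon-greedy bandit: the same loop that drives routine A/B testing,
# here run on invented message variants and engagement probabilities.
import random

random.seed(0)

# Hypothetical variants with true engagement rates unknown to the optimizer.
true_engagement = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}

counts = {v: 0 for v in true_engagement}     # times each variant was posted
successes = {v: 0 for v in true_engagement}  # engagements observed per variant
EPSILON = 0.1                                # exploration rate (hypothetical)

for _ in range(5000):
    if random.random() < EPSILON:
        choice = random.choice(list(true_engagement))  # explore a random variant
    else:
        # Exploit: pick the variant with the best observed engagement rate so far.
        choice = max(
            counts,
            key=lambda v: successes[v] / counts[v] if counts[v] else 0.0,
        )
    counts[choice] += 1
    if random.random() < true_engagement[choice]:
        successes[choice] += 1

for v in counts:
    rate = successes[v] / counts[v] if counts[v] else 0.0
    print(f"{v}: posted {counts[v]} times, observed engagement {rate:.3f}")
```

Run at swarm scale and around the clock, this kind of loop converges on the most engaging framing far faster than human-operated campaigns, which is what makes continuous automated testing a step change rather than an incremental one.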

Key Insight: The Harm Cascade

“Emerging capabilities of swarm-driven influence campaigns threaten democracy by shaping public opinion, which leads to cascading harms.” [2]

Goldenberg and his team argue that AI swarms could trigger a “cascade” of harms by systematically distorting the information ecosystem. By engineering “synthetic consensus” and targeting different misinformation at different communities, these agents could undermine the independent thought essential for collective intelligence while simultaneously fragmenting the public sphere. This manipulation, together with coordinated synthetic harassment campaigns, could create a hostile environment that drives journalists and citizens into silence. The damage would compound as swarms “poison” the web with fabricated content that contaminates future AI training data. Ultimately, this sustained erosion of trust could corrode institutional legitimacy, leaving democratic safeguards vulnerable to collapse.

Key Insight: A Layered Defense Strategy

“Taken together, these measures offer a layered strategy: immediate transparency to restore trust, proactive education to bolster citizens, resilient infrastructures to reduce systemic vulnerabilities, and sustained investment to monitor and adapt over time.” [3]

Rather than a single fix, the authors argue for a layered defense strategy designed to raise the cost, complexity, and visibility of swarm-based manipulation. The first layer is always-on detection: continuous monitoring systems that identify statistically anomalous coordination patterns in real time, paired with public audits and transparency to reduce misuse. Because attackers will adapt, detection alone is insufficient. A second layer involves simulation and stress-testing: agent-based simulations can replicate platform dynamics and recommender systems, letting researchers and platforms probe how swarms might evolve and recalibrate defenses before major elections or crises. Third, the authors emphasize empowering users through optional “AI shields,” tools that flag likely swarm activity so individuals can recognize suspicious content. Finally, the paper highlights governance and economic levers as essential. Proposals include standardized persuasion-risk evaluations for frontier models, mandatory disclosure of automated identities, stronger provenance infrastructure, and a distributed AI Influence Observatory to coordinate evidence across platforms, researchers, and civil society. Crucially, the authors argue that disrupting the commercial market for manipulation may be among the most effective ways to reduce large-scale abuse.
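
To make the first layer concrete, here is a minimal, hypothetical sketch of what flagging “statistically anomalous coordination patterns” can look like: it marks pairs of accounts whose hourly posting profiles are near-identical. The account names, activity data, and threshold are invented for illustration; the systems the authors describe would combine many richer signals (content similarity, reply graphs, provenance metadata) at platform scale.

```python
# Minimal illustrative sketch (not from the paper): flag account pairs whose
# posting times are unusually synchronized, one crude signal of coordination.
from collections import defaultdict
from itertools import combinations

# Hypothetical activity log: (account_id, hour-of-day of a post).
posts = [
    ("acct_a", 9), ("acct_a", 13), ("acct_a", 21),
    ("acct_b", 9), ("acct_b", 13), ("acct_b", 21),
    ("acct_c", 2), ("acct_c", 15), ("acct_c", 18),
]

# Build a 24-bin hourly activity profile per account.
profiles = defaultdict(lambda: [0.0] * 24)
for account, hour in posts:
    profiles[account][hour] += 1.0

def cosine(u, v):
    """Cosine similarity between two activity profiles."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

THRESHOLD = 0.95  # hypothetical cutoff for "suspiciously similar"
for a, b in combinations(sorted(profiles), 2):
    sim = cosine(profiles[a], profiles[b])
    if sim >= THRESHOLD:
        print(f"Possible coordination: {a} <-> {b} (similarity={sim:.2f})")
```

Running the sketch flags only the acct_a/acct_b pair, because their invented posting hours coincide exactly. In the layered strategy above, signals like this would feed audits and transparency reporting rather than automatic enforcement, since attackers adapt and false positives carry their own costs.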

Why This Matters

For business leaders and professionals, this study reveals a threat that extends beyond electoral politics into the fundamental information ecosystem that underpins market confidence, consumer behavior, and corporate reputation. The same AI swarm technologies that manipulate political discourse could just as easily target brand perception, financial markets, or industry narratives. The defense strategy outlined by the authors can likewise provide a roadmap for corporate action: implementing detection systems that monitor threats to brand reputation, advocating for industry standards around AI transparency, and supporting governance initiatives that protect the broader information ecosystem. Executives who treat information integrity as core infrastructure will be better positioned to protect stakeholder trust, decision quality, and long-term resilience in an era of AI-enabled influence operations.

Bonus

For a look at how efforts to align AI systems with human preferences can unintentionally undermine trustworthiness itself, check out “AI Alignment: The Hidden Costs of Trustworthiness.” 

References

[1] Daniel Thilo Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” Science 391 (2026): 354. https://doi.org/10.1126/science.adz1697

[2] Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” 355.

[3] Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” 357.

Meet the Authors

Amit Goldenberg is an assistant professor in the Negotiation, Organizations & Markets unit at Harvard Business School, an affiliate of Harvard’s Department of Psychology, and a faculty principal investigator in D^3’s Digital Emotions Lab.

Additional Authors: Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay Van Bavel, Sander van der Linden, and Jonas R. Kunst
