Navigating the risks of AI and democracy – What the rise of AI swarms reveals about the future of influence, information, and democratic resilience.
As we move into the era of agentic AI, what kind of influence will this emerging technology have on democracy and misinformation? In the new Science paper “How Malicious AI Swarms Can Threaten Democracy,” Amit Goldenberg, Assistant Professor of Business Administration at Harvard Business School and Faculty PI of the Digital Emotions Lab at the Digital Data Design Institute at Harvard (D^3), and an international, multidisciplinary group of co-authors argue that we are entering a phase in which “malicious AI swarms” could use multi-agent systems to infiltrate communities, mimic human social behavior, and iteratively refine persuasion tactics in real time. By escalating one-off misinformation into persistent manipulation, these systems threaten the information ecosystem that democratic societies depend on. Yet Goldenberg and his co-authors also outline technical, economic, and institutional measures that could meaningfully defend against this emerging danger.
Why This Matters
For business leaders and professionals, this study reveals a threat that extends beyond electoral politics into the broader information ecosystem underpinning market confidence, consumer behavior, and corporate reputation. The same AI swarm technologies that manipulate political discourse could just as easily target brand perception, financial markets, or industry narratives. The defenses outlined by the authors likewise provide a roadmap for corporate action: implementing detection systems that monitor threats to brand reputation, advocating for industry standards around AI transparency, and supporting governance initiatives that protect the shared information ecosystem. Executives who treat information integrity as core infrastructure will be better positioned to protect stakeholder trust, decision quality, and long-term resilience in an era of AI-enabled influence operations.
Bonus
For a look at how efforts to align AI systems with human preferences can unintentionally undermine trustworthiness itself, check out “AI Alignment: The Hidden Costs of Trustworthiness.”
References
[1] Daniel Thilo Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” Science 391 (2026): 354, https://doi.org/10.1126/science.adz1697.
[2] Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” 355.
[3] Schroeder et al., “How Malicious AI Swarms Can Threaten Democracy,” 357.
Sign up for our newsletter to stay up to date with D^3 news and research: https://d3.harvard.edu/#join-our-community