
Evidence at the Core: How Policy Can Shape AI’s Future


As AI technology advances, policymakers face the crucial task of steering its development responsibly. In a new paper published in Science, “Advancing science- and evidence-based AI policy,” a multidisciplinary group of experts, including Himabindu Lakkaraju, Assistant Professor of Business Administration at Harvard Business School and Principal Investigator in the Trustworthy AI Lab at the Digital Data Design Institute at Harvard (D^3), argues that the future of AI governance depends on robust support for both generating evidence and putting it to use.

Key Insight: Evidence Must Drive AI Policy

“Defining what counts as (credible) evidence is the first hurdle for applying evidence-based policy to an AI context.” [1]

The authors stress that the idea of evidence itself is neither simple nor settled. What qualifies as evidence varies across fields: in health policy, randomized controlled trials serve as the gold standard, while in economics, forecasts and theoretical models carry weight. History also shows that evidence is often questioned or ignored: the tobacco industry leaned on inconclusive studies to stall public health measures, and fossil fuel companies have downplayed climate risks despite knowing otherwise. These examples show that defining, evaluating, and acting on evidence is both urgent and complex. In response to these challenges, the authors encourage the US government to build on the Foundations for Evidence-Based Policymaking Act (Evidence Act).

Key Insight: Policy Can Accelerate Evidence Generation

“We recommend that policy-makers require major AI companies to disclose more information about their safety practices to governments and, especially, to the public.” [2]

The authors propose several mechanisms to make policy a driver of evidence creation. Policymakers should incentivize pre-release evaluations, ensuring that risks, such as malicious use, the likelihood of hallucinations, or the reproduction of copyrighted material, are measured before companies deploy new models. They also call for increased transparency, citing findings from the 2024 Foundation Model Transparency Index that top AI companies fall short in publicly reporting their risk-mitigation practices. They recommend post-deployment monitoring, such as adverse-event reporting systems that track concrete instances of harm once models are in use. Finally, they encourage protections for third-party research, noting that independent investigators often face legal and contractual barriers when probing AI systems; safe harbor provisions, modeled on cybersecurity law, would enable such research to proceed in the public interest. Together, these measures would expand the evidence base and allow AI policy to evolve in step with the technology itself.

Key Insight: Consensus in a Fragmented Field

“Scientific consensus, including on areas of uncertainty or immaturity, is a powerful primitive for better AI policy.” [3]

The AI research and policy community is currently divided, with divergent views on the seriousness of risks and the pace of technological progress. This lack of alignment makes it difficult to craft clear, effective policy responses. Drawing on precedents in climate governance and disaster policy, the authors call for deliberate processes that foster consensus, even amid uncertainty. Global initiatives, such as the UN’s High-Level Advisory Body on AI and proposals for an International Scientific Panel, aim to provide shared baselines of evidence. Such consensus would not eliminate debate, but it would ensure that disagreements unfold within a common evidentiary framework, strengthening the legitimacy and durability of policy decisions.

Why This Matters

As AI becomes more central to business operations, trustworthy and reliable systems will be crucial. Business leaders and executives will benefit from understanding the evolving landscape of AI policy, supporting evidence-based foundations for AI technology, and following the guidance of institutions that produce independent research. By aligning with these principles, companies will not only be ready to comply with emerging regulations but will also be better positioned to build trust with customers and stakeholders. As the authors conclude, governing AI will be one of the grand challenges of the 21st century, and informed business leaders have an important role to play in meeting it.

References

[1] Rishi Bommasani et al., “Advancing science- and evidence-based AI policy,” Science 389 (2025): 459. DOI: 10.1126/science.adu8449

[2] Bommasani et al., “Advancing science- and evidence-based AI policy,” 460.

[3] Bommasani et al., “Advancing science- and evidence-based AI policy,” 461.

Meet the Authors

Himabindu Lakkaraju is an Assistant Professor of Business Administration at Harvard Business School and a Principal Investigator in D^3’s Trustworthy AI Lab. She is also a faculty affiliate in the Department of Computer Science at Harvard University, the Harvard Data Science Initiative, the Center for Research on Computation and Society, and the Laboratory of Innovation Science at Harvard. Professor Lakkaraju’s research focuses on the algorithmic, practical, and ethical implications of deploying AI models in domains involving high-stakes decisions such as healthcare, business, and policy.

Additional Authors: Rishi Bommasani, Sanjeev Arora, Jennifer Chayes, Yejin Choi, Mariano-Florentino Cuéllar, Li Fei-Fei, Daniel E. Ho, Dan Jurafsky, Sanmi Koyejo, Arvind Narayanan, Alondra Nelson, Emma Pierson, Joelle Pineau, Scott Singer, Gaël Varoquaux, Suresh Venkatasubramanian, Ion Stoica, Percy Liang, and Dawn Song.
