On May 7, the Digital Data Design Institute at Harvard hosted Leading with AI: Exploring Business and Technology Frontiers. During the conference, a panel moderated by Mitchell B. Weiss, Richard L. Menschel Professor of Management Practice at Harvard Business School, brought together Anita Lynch, Board Member at Nasdaq U.S. Exchanges, Jonathan Zittrain, George Bemis Professor of International Law at Harvard Law School, and Noah Feldman, Felix Frankfurter Professor of Law at Harvard Law School, to discuss the future of AI regulation. Key themes included the existential risks associated with AI, the urgent need for comprehensive regulatory frameworks, and the steps companies can take to effectively regulate their own use of AI.
Executive Insight
The panel identified proactive internal measures, comprehensive regulatory frameworks, and increased transparency as the most important strategic approaches to AI regulation. Strict data governance policies that control access to and usage of sensitive data were also deemed crucial.
Key Insight: Existential Threat of AI
The panel explored the existential threat posed by AI, revealing a wide range of perspectives. Lynch emphasized understanding the context behind people’s opinions of AI, suggesting that general pessimism about the future could skew perceptions of AI risks: “[I]f you’re pessimistic about the possibility of the future in general, then you might be more inclined to be pessimistic about the causes as well.” She also shared her optimism about technology, arguing that technological advancements have historically brought positive change and that, accordingly, she views the existential risk from AI as very low. Weiss, however, questioned Lynch on whether excessive optimism might lead to underestimating the real dangers associated with AI development and deployment.
Zittrain suggested that the arguments for existential risk are often general, and that there are many contingencies we would have to pass before AI becomes an existential threat. Feldman echoed this point, noting that we should address more immediate concerns with AI before focusing on existential threats. Humans, he argued, tend to project their assumptions onto new technology; the belief that AI would cause our extinction if it became smarter than us is closer to science fiction and more a reflection of how we view the world than of the technology itself.
Key Insight: Too Early or Too Late on Regulation
The panel debated the timing of AI regulation, focusing on whether it is too early or too late to implement effective policies. Zittrain argued that lessons learned from problems like climate change could be used to create foundational guidance before misuse becomes a larger issue, and he emphasized the importance of adaptive, forward-thinking regulations that can evolve alongside AI technologies: “A little bit of deft adjustment now has compound payoff later…I’m not calling for a total moratorium on AI…but I do think some foundational understanding now, including the role of the public and private as this stuff gets embedded, [would pay off later].” Lynch, in turn, compared AI regulation to initiatives taken to regulate past technological innovations. She referenced a recent conversation with Andrew Ng, who compared the regulation of AI to the safety protocols around electricity: “AI is like electricity: it’s really hard to think about and anticipate all the different ways that it could be used or abused. However, if you extend the analogy, there are some basic safety protocols for electricity, like having outlets in our homes and knowing not to touch downed power cords.” Although it is difficult to anticipate every potential abuse of AI, safeguards could, as with electricity, be put in place and continuously reviewed to diminish negative effects.
Frameworks
Practical Steps for Companies
- Accountability Matrix – Create an accountability matrix to guide decisions around the use of AI in a company. This ensures that individuals and teams within the company are held accountable for the outcomes of any AI system that is deployed.
- Internal Committee Dedicated to Evaluating Risk – The panel emphasized the need for internal supervisory mechanisms to evaluate and manage the risks of AI-related decisions. The committee should oversee AI-related activities, ensure compliance with regulations, and implement best practices for risk management.
- Limiting Data Access – Companies should control who can access and use data. By limiting access to sensitive information, companies can prevent unauthorized usage and potential data breaches (see the access-control sketch after this list).
- Code Reviews for Engineers – The panel suggested establishing an internal system to review the code created by engineers to ensure high-quality output. By establishing best practices for coding, companies can reduce the risk of errors and maintain high standards in AI deployment.
- Licensure for Data Engineers – The panel suggested creating a licensure system for data engineers, with a focus on explainability*. Panelists noted that many professions with a lower chance of creating widespread risk are held to high professional standards, yet no such standards exist for data engineers, who could create high-risk situations when building AI systems. Companies could require formal certification and ongoing professional education to ensure AI practitioners adhere to ethical standards and regulatory requirements.
*As AI becomes more sophisticated, it is essential that data engineers can understand and explain how a model makes its decisions; a minimal example of one such check follows.
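To make the footnote concrete, here is a minimal sketch of one common explainability check, permutation importance from scikit-learn. The synthetic dataset and the random-forest model are illustrative assumptions, not tools named by the panel.

```python
# A hypothetical explainability check: shuffle each input feature and
# measure how much model accuracy drops. A large drop means the model
# leans heavily on that feature, which an engineer should be able to
# explain and defend.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; a real audit would use production features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance drop = {importance:.3f}")
```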
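Similarly, the “Limiting Data Access” step above might start with something as simple as an explicit allow-list checked before any read. The DataAccessPolicy class, dataset names, and roles below are hypothetical illustrations, not a system the panel described.

```python
# A minimal sketch of role-based access to sensitive datasets,
# assuming a deny-by-default policy: a role can read a dataset only
# if it has been explicitly granted access.
from dataclasses import dataclass, field

@dataclass
class DataAccessPolicy:
    # Maps each sensitive dataset to the set of roles allowed to read it.
    allowed_roles: dict = field(default_factory=dict)

    def grant(self, dataset: str, role: str) -> None:
        self.allowed_roles.setdefault(dataset, set()).add(role)

    def can_read(self, dataset: str, role: str) -> bool:
        return role in self.allowed_roles.get(dataset, set())

policy = DataAccessPolicy()
policy.grant("customer_pii", "data_steward")

assert policy.can_read("customer_pii", "data_steward")
assert not policy.can_read("customer_pii", "ml_engineer")  # denied by default
```

Deny-by-default is the key property of such a design: any dataset or role not explicitly granted is refused, so new data sources are protected before anyone remembers to lock them down.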
Meet the Speakers
Jonathan Zittrain is the George Bemis Professor of International Law at Harvard Law School. He is also a Professor of Public Policy at the Harvard John F. Kennedy School of Government, a Professor of Computer Science at the Harvard School of Engineering and Applied Sciences, director of the Harvard Law School Library, and co-founder and director of Harvard’s Berkman Klein Center for Internet & Society.
Anita Lynch is a seasoned executive with a proven track record of driving strategic initiatives, data leadership, and operational excellence at consumer and enterprise software companies. She currently serves on the Board of Nasdaq U.S. Exchanges and the boards of three AI-powered private companies, alongside her work as a thought leader and public speaker on data strategy, security, governance, and AI ethics, including an Executive Fellowship at Harvard Business School.
Noah Feldman is Felix Frankfurter Professor of Law, Chair of the Society of Fellows, and founding director of the Julis-Rabinowitz Program on Jewish and Israeli Law, all at Harvard University. He specializes in constitutional studies, with particular emphasis on power and ethics, design of innovative governance solutions, law and religion, and the history of legal ideas.
Mitch Weiss is the Richard L. Menschel Professor of Management Practice at Harvard Business School. He created and teaches the school’s course on Public Entrepreneurship—on public leaders and private entrepreneurs who invent a difference in the world. He is the faculty chair of the first year of the MBA program, where for many years he taught The Entrepreneurial Manager. He created and leads the “Teaching with AI” seminar from Harvard Business Publishing (HBP).
Additional Resources
- We Need to Control AI Agents Now (2024) – Published in The Atlantic, this piece by Prof. Zittrain discusses the urgency of regulating AI agents and the cost of failing to control a potentially dangerous technology.
- Anita Lynch, Board of Nasdaq U.S. Exchanges – Learn more about Anita’s career.
- Data Governance Best Practices from Disney – Listen to Anita Lynch’s interview discussing Disney Streaming’s data practices on Snowflake’s Rise of the Data Cloud podcast.
- Deep Background with Noah Feldman (Podcast) – Every story has a backstory, even in today’s 24-hour news cycle. In Deep Background, Harvard Law School professor and Bloomberg View columnist Noah Feldman brings together a cross-section of expert guests to explore the historical, scientific, legal, and cultural context that helps us understand what’s really going on behind the biggest stories in the news.