Trustworthy AI Lab
Artificial Intelligence (AI) is everywhere. It’s in the devices we use, the physical spaces we inhabit, and the interactions we have with brands. The applications are vast.
AI makes our lives easier, but it also adds to our anxiety.
The ethics of AI is an emerging area of academic research and corporate strategy. There are big questions to answer: How do we run companies with algorithms, and what does that mean for leaders? Will AI and machine learning (ML) help us make responsible, data-driven decisions? What impact does AI have on equity and inclusion for minority groups, and how do we measure it? How do we balance the value of algorithmic insights with individuals’ privacy rights?
These questions are worth exploring because they mark the new frontiers of business in a digital-first world, and will present some of the thorniest decisions facing modern managers.
The D^3 Institute’s Trustworthy AI Lab is at the forefront of this investigation. It is breaking new ground in the way AI and ML are applied to problem-solving and operations within enterprises. Our insights will shape how a generation of managers thinks about the role of AI within organizations.
The Trustworthy AI Lab is led by:
- Marco Iansiti, David Sarnoff Professor of Business Administration at HBS and co-author of ‘Competing in the Age of AI’.
- Himabindu Lakkaraju, Assistant Professor of Business Administration at HBS. She holds a PhD from Stanford University.
- Seth Neel, Assistant Professor of Business Administration at HBS. He studies the theory and applications of machine learning, with a focus on privacy and fairness, and holds a PhD from the University of Pennsylvania.
- Salil Vadhan, Vicky Joseph Professor of Computer Science and Applied Mathematics at SEAS. He holds a PhD from the Massachusetts Institute of Technology, leads Harvard’s Privacy Tools Project, and co-directs the OpenDP Project for open-source privacy software.
The lab will focus on operating improvements, worker efficiency, and managerial decision-making. It will consider:
- Integration of AI tools and ML models into workflows.
- Seamless interaction between humans and ML models to enable critical tasks.
Research on the algorithmic aspects of AI and ethics focuses on:
- Making AI/ML models easier for humans to understand and apply.
- ML models that are fair to minority groups and robust to adversarial manipulations.
- Use of AI to solve complex problems involving multiple objective functions across multiple time scales.
- Studying the privacy risks of ML models and developing privacy-preserving algorithms that build models in a way that is guaranteed to avoid those risks.
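As a concrete illustration of the privacy-preserving direction above (a textbook technique, not the lab's specific method), the Laplace mechanism from differential privacy adds calibrated noise to a query's answer so that the presence or absence of any single individual's record cannot be reliably inferred from the output. The sketch below is a minimal, hypothetical example; the function names and the toy dataset are illustrative only.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float, rng=None) -> float:
    """Differentially private count: the true count plus Laplace(1/epsilon) noise.

    A counting query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so noise with scale 1/epsilon suffices for
    epsilon-differential privacy. Smaller epsilon means stronger privacy
    and noisier answers.
    """
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Example: count ages over 40 in a toy dataset with a privacy budget of 0.5.
ages = [23, 45, 31, 67, 52, 38, 29, 71]
noisy = private_count(ages, lambda a: a > 40, epsilon=0.5, rng=random.Random(0))
```

The released value is the noisy count, never the exact one; an analyst sees a useful aggregate while any individual record's contribution is masked by the noise.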
The Trustworthy AI Lab is participating in the Generative AI Working Group.