This series introduces Harvard Business School AI Institute Associates Program projects, which aim to answer important questions at the intersection of artificial intelligence and digital technologies in business and society.
This article shares insights from Michael Lingzhi Li, Assistant Professor of Business Administration at Harvard Business School, who is pursuing research on the topics of artificial intelligence and organizations.
1. What drew you to this area of research and how did you first become involved in this work?
I have long been interested in how AI changes the nature of work in high-stakes environments such as healthcare, where consistent, safe operation is essential. Much of my broader research agenda focuses on designing decision systems that combine machine learning with human judgment. As AI systems became increasingly capable, it became clear that the central challenge was no longer prediction accuracy alone, but how these systems interact with human decision makers over time.
I became particularly interested in settings where AI is deployed in a human-in-the-loop workflow, with the human expected to serve as the final safeguard against error. Conceptually, this arrangement sounds reassuring. In practice, however, the behavioral dynamics are more complicated. As models improve, humans may reduce effort, defer too quickly, or disengage from critical review. That tension between technical performance and human vigilance is what drew me into studying human-AI collaboration more directly.
2. What are some common misconceptions or barriers around the problem you’re working to solve?
A common assumption is that adding AI assistance automatically increases productivity and quality. In many contexts this is true. However, there are also settings in which AI reduces the effectiveness of human decision makers because individuals outsource cognitive effort to the model. When the AI is usually correct, people may stop actively verifying its outputs. In high-stakes domains, this creates a fragile system in which rare errors are more likely to be missed.
Another misconception is that the only design question is how to improve the model. In collaborative systems, performance depends not only on algorithmic accuracy but also on how humans respond behaviorally. Overreliance, complacency, and reduced vigilance are not failures of technology per se, but failures of system design. Addressing these requires thinking beyond prediction and toward incentive and attention management.
3. What research is being done on this topic and how is your approach or perspective unique?
Existing research examines automation bias, trust, and algorithm aversion. Much of it focuses on perceptions or one-shot decisions. My work studies sustained performance in longitudinal workflows and introduces a design intervention: controlled error injection. Instead of only improving model accuracy, we ask whether a small, realistic level of imperfection can optimize joint human-AI performance over time.
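To make the idea concrete, here is a minimal, purely illustrative sketch of what controlled error injection could look like in a review queue. This is not the study's actual protocol; the function name, labels, and error rate are assumptions chosen for illustration. A small fraction of AI predictions is deliberately perturbed, and each injected error is flagged internally so the system can later audit whether human reviewers caught it.

```python
import random


def inject_errors(predictions, error_rate=0.05, labels=(0, 1), rng=None):
    """Deliberately flip a small fraction of AI predictions.

    Returns the (possibly perturbed) predictions and a parallel list of
    flags marking which items were injected errors, so the system can
    measure whether reviewers detect them. Illustrative sketch only.
    """
    rng = rng or random.Random()
    perturbed, flags = [], []
    for p in predictions:
        if rng.random() < error_rate:
            # Swap the prediction for a different label from the label set.
            alternatives = [label for label in labels if label != p]
            perturbed.append(rng.choice(alternatives))
            flags.append(True)
        else:
            perturbed.append(p)
            flags.append(False)
    return perturbed, flags
```

Because the injection flags are recorded, an organization could track the reviewers' catch rate on injected errors over time as a direct measure of sustained vigilance, which is the kind of longitudinal outcome this research examines.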
4. What excites you most about this work and its potential impact?
What excites me most is the possibility of designing collaborative systems that are safer than either humans or AI alone. In many real-world deployments, humans are nominally the final line of defense against model errors. In practice, as AI performance improves, humans often disengage and cease to function as an effective safeguard.
If we can show that modest, carefully calibrated error injection sustains vigilance without materially undermining efficiency or trust, it would offer a simple and scalable design principle. Rather than relying on constant retraining or more complex oversight structures, organizations could structure workflows to preserve human attention in a principled way. In domains such as drug safety, clinical decision support, or content moderation, the stakes are high enough that even small improvements in sustained detection performance could have meaningful societal impact.
5. How do you hope working with the HBS AI Institute will amplify the impact of your work?
Working with the HBS AI Institute will deepen my exposure to how organizations are actually deploying AI in operational settings. Many of the challenges in human-AI collaboration only become visible at scale, when systems are embedded in real workflows with real incentives and constraints. Engaging with practitioners across industries will provide insight into where vigilance breaks down, what governance structures are emerging, and which design choices are proving effective in practice.
The HBS AI Institute also creates an opportunity to stress-test and refine our research questions against real organizational problems. That feedback loop is critical. It ensures the work remains grounded in implementation realities rather than abstract laboratory settings, and it increases the likelihood that the design principles we develop can meaningfully influence how firms structure human oversight in AI-enabled systems.
6. What changes do you hope to see in your field as a result of the work being done in this area?
I hope the field shifts from evaluating standalone model accuracy to evaluating joint human-AI performance over time. The relevant unit of analysis should be the collaborative system, not just the algorithm.
7. What’s an essential area in which AI and digital technologies will reshape the way businesses or society operate in the long run that we may not be considering?
One underappreciated area is how AI will reshape accountability structures. As AI systems take on more cognitive tasks, responsibility for errors becomes diffused across algorithms, designers, and human overseers. In many organizations, humans remain formally accountable even when they exert limited real control.
Over time, this tension will force firms to rethink governance, oversight, and liability. The key question will not only be whether AI is accurate, but who is responsible when collaborative systems fail. Designing systems that preserve meaningful human agency and attention is therefore not only a performance issue but also an institutional one.
The Harvard Business School AI Institute Associates Program supports and accelerates faculty research into the ways AI and digital technologies are reshaping companies, organizations, society, and practice.