
Drawing the Line on AI Usage in the Workplace

As AI systems increasingly outperform humans across a range of tasks, the economic logic seems clear: more capable, more cost-effective AI should lead to widespread automation. A new Harvard Business School working paper, “Performance or Principle: Resistance to Artificial Intelligence in the U.S. Labor Market,” co-authored by Simon Friis, postdoctoral fellow at the Laboratory for Innovation Science at Harvard (LISH), part of the Digital Data Design Institute at Harvard (D^3), and James W. Riley, Assistant Professor of Business Administration at Harvard Business School, puts that hypothesis to the test and reveals a more nuanced answer. The issue isn’t just what AI can do, but what we’ll allow it to do.

Key Insight: Mapping AI Resistance

“We conducted a survey of 2,357 U.S. adults designed to measure public support for AI automation and augmentation across a comprehensive set of occupations.” [1]

Participants rated a sample drawn from 940 occupations twice: first under current AI capabilities, then imagining AI that exceeds human performance at the job while costing less. The researchers also developed and validated a new scale measuring moral repugnance toward AI: the perception that using AI in certain contexts is inherently wrong, irrespective of its benefits. The scale taps into fundamental concerns about human dignity, betrayal, and categorical prohibitions that no amount of engineering can overcome. This design allowed the researchers to distinguish two fundamentally different sources of resistance to AI: performance-based concerns and principle-based objections.

Key Insight: Performance Concerns Fade Fast

“Public support for AI-driven automation nearly doubles—from 30% to 58% of occupations—when AI is described as clearly outperforming human workers, suggesting that most resistance is contingent on perceived capability.” [2]

The researchers identify performance-based resistance as opposition rooted in AI’s current technical limitations, including accuracy, reliability, cost, and speed. We might expect this type of resistance to recede as AI becomes more capable and cost-effective over time, and the study bears this out. The effect was especially pronounced for occupations deemed morally permissible for AI assistance (augmentation) and replacement (automation), such as clerks, transportation planners, and data entry keyers.

Key Insight: The Principle Line

“[O]ur findings reveal a sharply delimited moral frontier, where a small subset of sacrosanct occupations remains off-limits, within an otherwise permissive labor market increasingly open to AI as performance improves.” [3]

Other occupations, including clergy, childcare workers, and therapists, fall into the category of principle-based resistance to AI. In these cases, AI faces outright rejection that doesn’t budge even when it’s positioned as better, faster, and cheaper; its use in these roles is deemed morally repugnant regardless of capability. What makes these occupations special? They share common threads of caregiving, emotional labor, public speaking, and spiritual leadership. The researchers highlight that the interplay between AI capabilities and human repugnance creates “moral friction zones,” where capability meets rejection (e.g., school psychologists and fraud examiners), and “latent zones,” where acceptance runs ahead of current ability (e.g., cashiers, conveyor operators). [4]

Why This Matters

For business leaders and executives, this research is both liberating and sobering. Liberating, because a large share of public hesitation is performance-based: as your models improve, acceptance tends to follow. Sobering, because a line remains where AI is judged intrinsically inappropriate. The strategic response isn’t abandoning AI, but designing hybrid solutions that preserve human touchpoints in morally sensitive tasks, carefully framing AI as augmentation rather than replacement, and investing in transparency and ethics communication.

Bonus

Just as this research shows that better AI doesn’t guarantee broader acceptance, earlier D^3 work revealed that improving AI capabilities can actually reverse inequality effects in unexpected ways. For more on how AI’s relationship with workers shifts as technology advances, check out Who Benefits When Bots Get Better?

References

[1] Simon Friis and James W. Riley, “Performance or Principle: Resistance to Artificial Intelligence in the U.S. Labor Market,” Harvard Business School Working Paper No. 26-017 (October 6, 2025): 6, https://ssrn.com/abstract=5560401

[2] Friis and Riley, “Performance or Principle,” 5.

[3] Friis and Riley, “Performance or Principle,” 5.

[4] Friis and Riley, “Performance or Principle,” 16.

Meet the Authors


Simon Friis is a postdoctoral fellow at the Laboratory for Innovation Science at Harvard (LISH), part of the Digital Data Design Institute at Harvard (D^3). His research focuses on the social and economic impacts of generative AI.


James W. Riley is an Assistant Professor of Business Administration in the Organizational Behavior unit at Harvard Business School. He is an economic sociologist, conducting ethnographic research to produce qualitative studies on the role of status, norms, social valuations, and organizational culture within innovation-driven organizations, creative industries, and cultural markets.
