
The Hidden Economics of Workplace AI


As AI learns directly from how people work, a new tension is emerging about expertise, power, and governance.

In many workplaces, the newest addition to virtual meetings isn’t a colleague but an AI assistant like Granola or Otter. Suddenly no one has to scramble for action items or wonder who said what. The tool fades into the background while work gets a little smoother. And somewhere downstream, the precise record of how capable people think through a problem, handle a difficult client, or navigate a complex negotiation becomes raw material for an AI model. The convenience is real, and the implications are enormous.

The new working paper “Labor as Capital: AI and the Ownership of Expertise,” co-written by D^3 Associate Zoë Cullen, confronts this dynamic head-on. What happens when workers realize that their work habits, insights, and creativity are training the systems that could replace them? Combining survey evidence, a randomized experiment, and formal economic theory, the authors show that when workers understand that the information they share may strengthen the organization’s hand later, they may change how much they reveal.

Key Insight: The Surveillance Economy

“[W]ork increasingly generates data about work: records of how exactly people do their jobs.” [1]

The working paper argues that workplace surveillance has created a new pipeline from labor to AI: the data produced while employees do their jobs can now be used to train systems that replicate or automate their expertise. This matters because workers report holding large amounts of valuable “uncodified” knowledge: tacit know-how, judgment, communication skills, and context-specific understanding of customers, projects, and processes that is not fully captured in manuals or company-wide wikis (think Atlassian Confluence). The authors find that workers believe they have substantial control over how much of their knowledge becomes visible to employers, whether by documenting more carefully, communicating on or off monitored channels (think Slack), or altering their participation in surveilled workflows. As workers become aware that this information may be used to build AI that performs similar work, they may become less willing to share it.

Key Insight: When Workers Find Out, They Pull Back

“[W]orkers who are more aware of how their data may be used choose to forgo meaningful payments for both their past and future data.” [2]

In a study with 971 participants drawn from the online survey platform Prolific, the authors randomly showed half the group a video explaining that AI systems can be trained on records of worker behavior, including their own survey responses. The other half watched an equivalent video that said nothing about data being used for AI training. The effect was stark. Among workers who saw the AI-training explanation, 41% refused to share their past survey data in exchange for a $10 bonus, compared to just 25% in the control group. The same workers were also significantly less willing to participate in future surveys at their existing wage. These results drive the paper’s theoretical argument. In the model, workers recognize that the knowledge they reveal today can improve the firm by helping create AI that substitutes for their expertise. Anticipating weaker future bargaining power, workers may withhold knowledge in the present. That withholding is individually rational, but collectively costly: it reduces productivity and limits the quality of the AI systems firms can build. Under the current default, worker awareness slows adoption not simply because people dislike AI, but because workers have reason to protect themselves.
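The headline gap is large enough to stand out even without the paper’s full analysis. A quick back-of-the-envelope check (a sketch, assuming the 971 participants split roughly evenly between arms, since the exact counts aren’t given here) shows the 41% vs. 25% difference is far outside what chance alone would produce:

```python
from math import sqrt

# Assumed arm sizes: the paper reports 971 participants randomly
# split in half; the exact per-arm counts are an assumption here.
n_treat, n_ctrl = 486, 485
p_treat, p_ctrl = 0.41, 0.25   # share refusing to sell past data for $10

# Standard pooled two-proportion z-test for the difference in refusal rates.
p_pool = (p_treat * n_treat + p_ctrl * n_ctrl) / (n_treat + n_ctrl)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl))
z = (p_treat - p_ctrl) / se
print(f"z = {z:.1f}")  # prints "z = 5.3"
```

A z-statistic around 5.3 is far beyond the conventional 1.96 threshold for significance at the 5% level, consistent with the authors describing the effect as stark.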

Key Insight: A Fight Over Ownership and Governance

“[C]ollective bargaining over work data eliminates this externality and can achieve both efficient knowledge sharing and a more equitable division of the gains from AI.” [3]

The paper highlights a gap between what workers prefer and what may best protect them. Workers in the survey favored individual ownership of work data, meaning the right to control and sell their own data for AI development. But because workers’ knowledge supplies (“the recorded aspects of labor” [4] that could train an AI) can substitute for one another, each individual sale strengthens the firm’s bargaining position against every other worker. Collective ownership resolves this. When workers bargain jointly and their knowledge supplies are bundled together, one worker’s contribution no longer undermines another’s position. The competition externality disappears. The broader implication is that workplace AI governance should be understood not just as a privacy issue, but as a labor-market and institutional-design issue shaped by bargaining power, ownership rights, and collective labor arrangements.
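The competition externality has the familiar structure of a prisoner’s dilemma. The toy sketch below uses purely illustrative numbers (not from the paper, and not the authors’ formal model) to show why individually rational data sales can leave every worker worse off:

```python
# Illustrative payoffs only: selling your work data earns you GAIN,
# while a coworker's sale costs you SPILLOVER in future bargaining
# power, because the trained AI partially substitutes for your expertise.
GAIN, SPILLOVER = 4, 6

def payoff(i_sell: bool, other_sells: bool) -> int:
    """Net payoff to one of two symmetric workers."""
    return (GAIN if i_sell else 0) - (SPILLOVER if other_sells else 0)

# Selling is a dominant strategy for each worker individually...
assert payoff(True, True) > payoff(False, True)    # -2 > -6
assert payoff(True, False) > payoff(False, False)  #  4 >  0

# ...yet the resulting equilibrium (both sell) leaves each worker
# worse off than if the pair could jointly withhold or price the data.
print(payoff(True, True), payoff(False, False))  # prints: -2 0
```

Collective bargaining, in this stylized picture, lets workers internalize the spillover and either withhold jointly or demand a price that covers the harm, which is the intuition behind the paper’s efficiency result.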

Why This Matters

For business leaders, this research surfaces a friction that most AI adoption strategies don’t yet account for. The employees whose expertise you most need to encode may be precisely the ones most aware of what’s at stake when they share it. As AI tools become more capable and more visible in the workplace, worker awareness will only rise, and strategic withholding may rise with it. This creates a clear managerial implication: organizations can improve AI adoption not just by deploying better tools, but by addressing employees’ career concerns directly and giving people more meaningful control over how their work data is used. Firms that treat data governance as part of talent strategy and innovation design, rather than as a legal checkbox, may be better positioned to unlock mutual benefit: stronger AI performance, higher productivity, and gains that are shared more broadly by the people helping to build the organization’s future.

Bonus

This paper shows that resistance to workplace AI is not just a matter of fear or inertia; it can emerge whenever new systems redistribute knowledge, bargaining power, or control over how work gets done. For another example, where the friction appears closer to management, check out The Manager’s AI Dilemma for a perspective on how AI can threaten the authority, discretion, and legitimacy of the very roles expected to approve and implement AI in the workplace.

References

[1] Cullen, Zoë, Danielle Li, and Shengwu Li, “Labor as Capital: AI and the Ownership of Expertise,” Working Paper (March 30, 2026): 1.

[2] Cullen et al., “Labor as Capital,” 16.

[3] Cullen et al., “Labor as Capital,” 2.

[4] Cullen et al., “Labor as Capital,” 1.

Meet the Authors


Zoë Cullen is Associate Professor of Business Administration at Harvard Business School and Associate at D^3.


Danielle Li is the David Sarnoff Professor of Management of Technology and a Professor at the MIT Sloan School of Management.


Shengwu Li is Professor of Economics at Harvard University.

Watch a video version of the Insight Article here.
