If AI can accept light supervision and then be off and running, what does it mean for how leaders and organizations design work, govern risk, and account for value? Drawing on perspectives from Jen Stave, Executive Director of the Digital Data Design (D^3) Institute at Harvard, Columbia Business School's Stephan Meier, and Salesforce CEO Marc Benioff, the recent New York Times Shop Talk article "Where Human Labor Meets 'Digital Labor'" briefly explores the rise and implications of AI agents that can act like teammates or supervisees.
Key Insight: Agentic AI as Managed Teammates
“Like a human employee, these tools would work independently with a bit of management.”
Jen Stave
Agentic tools are moving beyond chatbots and image generation. Unlike traditional automation that follows rigid scripts, AI agents function more like human employees, capable of making decisions independently once given high-level goals.
Key Insight: An Uncertain Future
“How the fruits of digital labor will be treated in economic terms is still unsettled.”
Jen Stave
On one hand, the impact of AI is already here and measurable: Salesforce's use of AI agents cut customer service costs by 17% over nine months. But the article also raises open questions about who captures the economic value, how to ensure quality and accountability, and how to strike the right balance between human and AI workers.
Why This Matters
For forward-thinking executives, the question is increasingly not whether to adopt agentic AI, but how to operationalize it productively and responsibly. While the efficiency gains are compelling, success requires thoughtful integration by leaders who are ready to address the challenges of workforce transition, quality control, and ROI measurement.
Bonus
To learn more about agentic AI and digital labor, read "Agentic AI is Already Changing the Workforce," co-authored by Jen Stave for the Harvard Business Review.