
The Three Ways Professionals Work with AI – Which One Are You?

Workers have moved past the initial shock of Generative AI’s arrival. The tool is here, it’s accessible, and it might be open in one of your browser tabs right now. But a critical challenge remains: knowing that you should use AI is very different from knowing how to weave it into the complex, interconnected reality of high-level problem solving. In the new working paper “Cyborgs, Centaurs and Self-Automators: The Three Modes of Human-GenAI Knowledge Work and Their Implications for Skilling and the Future of Expertise,” a team including members of the Digital Data Design Institute at Harvard (D^3) studied over 200 consultants from the Boston Consulting Group (BCG) as they completed the same strategic task. They found that professionals segmented into three distinct “species” of AI users: Cyborgs, Centaurs, and Self-Automators. 

Key Insight: Cyborgs – Fluid Collaborators

“Their collaboration unfolded as an iterative dialogue: probing AI outputs, extending ideas, and validating results in a seamless rhythm of joint problem-solving.” [1]

Cyborgs were the study’s most common type, accounting for 60% of participants, and they reflect Fused Co-Creation, where GenAI is woven throughout the workflow. The human still decides what they’re trying to accomplish, but GenAI often drives how the work gets executed: drafting, analyzing, generating options, and reworking outputs through back-and-forth interaction. This isn’t “prompt once, accept once.” Cyborgs use interaction practices like modularizing (breaking tasks into steps), validating, adding data to rerun analyses, and pushing back when outputs conflict with their own view.

Thanks to this human-AI integration, Cyborgs developed entirely new AI-related expertise, what the researchers call “newskilling,” while maintaining their domain knowledge. But the study also flags a persistent risk: even with best practices like validation, GenAI can confidently reinforce the wrong path, meaning Cyborg fluency needs to include knowing when not to trust the conversation.

Key Insight: Centaurs – Strategic Selectors

“These professionals remained firmly in the driver’s seat, leveraging GenAI to enhance efficiency and polish outputs without surrendering their judgment.” [2]

14% of consultants worked as Centaurs, professionals who engaged in what the researchers call Directed Co-Creation. Like the mythical half-human, half-horse creature, these workers maintained a clear division of labor: humans decided both what needed to be done and how to do it, using AI selectively for specific support tasks. Centaurs used AI primarily through three practices: mapping the problem domain (asking AI for general information), gathering methods information (requesting specific techniques or formulas), and refining their own human-generated content. For example, one consultant asked GenAI, “How do I calculate the market size growth rate of some industry from 2013 to 2017?” [3] After receiving the formula, they performed the calculation in Excel rather than delegating it to AI.
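As an illustration (not drawn from the paper), the growth-rate question in that exchange is typically answered with a compound annual growth rate (CAGR). A minimal sketch, using hypothetical market-size figures, might look like this:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two market-size observations."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical figures: a market growing from $40B in 2013 to $58B in 2017.
growth = cagr(40.0, 58.0, years=2017 - 2013)
print(f"{growth:.1%}")  # annualized growth rate over the four-year span
```

In the Centaur pattern, GenAI supplies the formula, but the professional runs the numbers themselves, in Excel or a script like this, keeping the analytical step in human hands.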

The payoff? Centaurs achieved the highest accuracy in business recommendations among all three groups. They developed and strengthened their domain expertise by treating AI as an intelligent search engine and writing assistant. However, Centaurs also faced a trade-off: while they upskilled in task-related capabilities, they didn’t develop new AI-related expertise. They remained cautious about changing their established workflows, with some expressing ethical concerns about taking credit for AI-generated work.

Key Insight: Self-Automators – Dangerous Delegates

“Their work was fast and polished but lacked depth, resembling outputs completed for them rather than with them.” [4]

The final group is the Self-Automator, comprising 27% of the participants. These professionals engaged in Abdicated Co-Creation, consolidating the entire problem-solving workflow into one or two interactions, copying all data into a single prompt, and accepting AI’s outputs with minimal engagement. Some 44% of this group accepted AI’s output without any modification, while the rest made only superficial edits. The researchers are careful here: abdication isn’t always bad. For routine tasks, tight deadlines, or problems that sit well within GenAI’s capabilities, full delegation may be efficient, and Self-Automators saw immediate productivity gains.

But when the work requires judgment—framing the real problem, deciding what’s important, evaluating competing narratives—abdication can hollow out the very capabilities that make professionals valuable. In the study’s framing, when you give up control over what you do, you rarely keep control over how you do it. These professionals developed neither domain expertise nor AI-related skills.

Why This Matters

For business leaders and executives, this research suggests AI implementation guidelines for strategic task alignment. When the cost of error is high, such as in financial forecasting or operational diagnostics, leaders should encourage Centaur behavior by requiring professionals to execute the core analysis themselves while using AI for targeted support. This will maximize accuracy while ensuring teams continue to deepen essential domain expertise. Conversely, for tasks requiring rhetorical flair, ideation, or stakeholder persuasion, the Cyborg mode should be encouraged. That style of iterative, integrated looping allows for greater creative extension and the development of cutting-edge AI fluency. Ultimately, the most valuable professional of the future might not be a fixed type, but an adaptive thinker capable of toggling between these modes.

Bonus

This research builds directly on a core theme in D^3 research: understanding how AI reshapes not just productivity, but the nature of expertise and collaboration itself. To read more about how AI can functionally substitute for or augment human teammates, breaking down silos and enabling individuals to perform at team-level quality, check out The Cybernetic Teammate: How AI is Reshaping Collaboration and Expertise in the Workplace.

References

[1] Steven Randazzo et al., “Cyborgs, Centaurs and Self-Automators: The Three Modes of Human-GenAI Knowledge Work and Their Implications for Skilling and the Future of Expertise,” The Wharton School Research Paper, Harvard Business School Working Paper No. 26-036 (December 08, 2025): 12. https://ssrn.com/abstract=4921696 

[2] Randazzo et al., “Cyborgs, Centaurs and Self-Automators,” 12.

[3] Randazzo et al., “Cyborgs, Centaurs and Self-Automators,” 26.

[4] Randazzo et al., “Cyborgs, Centaurs and Self-Automators,” 12.

Meet the Authors


Steven Randazzo is a doctoral candidate at Warwick Business School and collaborator at the Laboratory for Innovation Science at Harvard (LISH).


Hila Lifshitz is a Professor of Management at Warwick Business School (WBS) and a visiting faculty at Harvard University, at the Laboratory for Innovation Science at Harvard (LISH). She heads the Artificial Intelligence Innovation Network at WBS.


Katherine C. Kellogg is the David J. McGrath Jr Professor of Management and Innovation and a Professor of Business Administration at the MIT Sloan School of Management. Her research focuses on helping knowledge workers and organizations develop and implement Predictive and Generative AI products, on the ground in everyday work, to improve decision making, collaboration, and learning.


Fabrizio Dell’Acqua is a postdoctoral researcher at Harvard Business School. His research explores how human/AI collaboration reshapes knowledge work: the impact of AI on knowledge workers, its effects on team dynamics and performance, and its broader organizational implications.


Ethan Mollick is an Associate Professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship, and examines the effects of artificial intelligence on work and education. Ethan is the Co-Director of the Generative AI Lab at Wharton, which builds prototypes and conducts research to discover how AI can help humans thrive while mitigating risks.


Francois Candelon is a Partner, Value Creation & Portfolio Monitoring, at Seven2.


Karim R. Lakhani is the Dorothy & Michael Hintze Professor of Business Administration at Harvard Business School. He specializes in technology management, innovation, digital transformation, and artificial intelligence. He is also the Co-Founder and Faculty Chair of the Digital Data Design (D^3) Institute at Harvard and the Founder and Co-Director of the Laboratory for Innovation Science at Harvard (LISH).

Join Our Community

Ready to dive deeper with the Digital Data Design Institute at Harvard? Subscribe to our newsletter, contribute to the conversation and begin to invent the future for yourself, your business and society as a whole.