
D^3 Associates Spotlight Series: Alex Chan

This series introduces D^3 Associates Program projects that aim to answer important questions at the intersection of artificial intelligence, digital technologies, business, and society.

This article shares insights from Alex Chan, Assistant Professor of Business Administration in the Negotiation, Organizations & Markets Unit at Harvard Business School, who is pursuing research on artificial intelligence and organizations.

1. What drew you to this area of research, and how did you first become involved in this work?

My background spans both technology and healthcare, which naturally pulled me toward the “engineering” side of economics—market design. I became fascinated by how small changes in market rules, incentives, or even information presentation can meaningfully shape human behavior—sometimes with life-or-death consequences in settings like healthcare and organ allocation.

My interest in AI grew from two directions. On the research side, I worked early on questions around deep learning’s ability to extract patient preferences and clinically relevant signals from unstructured data like clinical notes. On the applied side, my time in industry deploying AI-enabled healthcare products made the promise—and the risk—very concrete: technology can match expert performance and save enormous amounts of time, but it also changes how people make decisions and how accountability is assigned. That combination convinced me that one of the next major market design challenges is not just building better AI systems, but integrating AI into human decision-making environments in ways that are robust, incentive-compatible, and ultimately welfare-improving—especially as we think ahead to more advanced systems.

2. What are some common misconceptions or barriers around the problem you’re working to solve?

A major misconception is that “more information” automatically leads to better decisions. In the context of Explainable AI (XAI), for instance, many people assume that if you provide an explanation, decision-makers will naturally use it to make fairer, better choices. But in practice, transparency can create strategic discomfort: explanations can reveal biases, conflicts of interest, or decision rules that stakeholders would rather not surface—especially when there are financial incentives, reputational concerns, or legal exposure at stake.

One barrier, then, is that people may strategically prefer “black-box” systems—not because they love opacity, but because opacity can protect them from scrutiny or responsibility. Another barrier is that we often forecast AI’s societal impact by linearly extrapolating from recent waves of automation. That framing can miss how AI will reshape how preferences are expressed, how trust is formed, and how institutions evolve when cognition, forecasting, and persuasion become more scalable and more delegated to machines.

3. What research is being done on this topic and how is your approach or perspective unique?

A lot of the current research rightly focuses on the technical “how-to” of AI—building more accurate models, improving interpretability methods, and optimizing performance. My perspective is complementary: I treat AI as a participant in a market or organization rather than simply a tool. That means I focus on how AI systems interact with incentives, power, accountability, and human behavior—often in ways that aren’t visible if we only measure technical accuracy.

For example, in my working paper “Preference for Explanations: The Case of XAI,” I don’t just ask whether an AI can explain itself—I ask whether people actually want explanations when real incentives and tradeoffs are present. Using incentivized experiments with real financial stakes helps reveal when transparency is demanded, when it’s avoided, and why.

More broadly, by combining market design and behavioral economics, I can study how AI decision-support, monitoring, or recommendation systems interact with factors like gender, race, hierarchy, and institutional constraints—dimensions that pure computer science approaches often treat as “downstream” but that frequently determine real-world outcomes. Market design also pushes us to analyze markets that don’t fully exist yet, which is increasingly important as AI changes what it even means to “participate” in a market.

4. What excites you most about this work and its potential impact?

What excites me most is the possibility of moving beyond the idea that AI progress is mainly about better prediction—and toward the idea that progress is about better systems. If we design incentives and institutions well, AI can reduce cognitive overload, improve access to expertise, and make high-stakes decisions more consistent and less arbitrary. In healthcare, that can translate into better triage, more equitable access, reduced clinician burnout, and ultimately better patient outcomes.

At the same time, I’m excited by the intellectual challenge: AI changes the “rules of the game” in markets and organizations. We now have decision-makers who can delegate judgment to models, organizations that can scale monitoring and evaluation, and environments where explanations can be demanded, ignored, weaponized, or strategically suppressed. Understanding those dynamics—and designing mechanisms that make good outcomes more likely—feels both urgent and deeply consequential.

5. How do you hope working with D^3 will amplify the impact of your work?

D^3 is an ideal home for this kind of research because it brings together technologists, economists, organizational scholars, and practitioners who are grappling with the same reality from different angles. I see D^3 as a “translation layer” between theory and deployment: a place where questions about incentives, governance, and real-world adoption can be stress-tested against how organizations actually operate.

I also hope D^3 will amplify impact through its convening power and practitioner ecosystem—helping connect research insights to real institutional design decisions, from product development and auditing to policy, procurement, and organizational governance. When the goal is not just to understand AI, but to shape how it’s used responsibly and effectively, that cross-disciplinary and real-world engagement is invaluable.

6. What changes do you hope to see in your field as a result of the work being done in this area?

I hope to see market design become a central lens for thinking about AI, including advanced systems that may begin to act more like autonomous agents in the economy. Rather than relying primarily on after-the-fact regulation or patchwork compliance, I want to see organizations design digital ecosystems from the ground up with incentives that support transparency, productivity, and fairness simultaneously.

In practical terms, that means shifting from “Can we build this model?” to “What behavior does this system produce once it’s embedded in an institution with real incentives?” It also means building stronger evidence around what kinds of transparency and accountability mechanisms actually work—not just in principle, but in practice.

7. What’s an essential area in which AI and digital technologies will reshape the way businesses or society operate in the long run that we may not be considering?

One underappreciated shift is that AI won’t just replace tasks—it will reshape the institutional infrastructure through which preferences, negotiations, and decisions happen. As personal AI agents become more common—agents that summarize options, negotiate on our behalf, filter information, and even execute transactions—markets may increasingly become “agent-to-agent.” That changes what it means to have a preference, how trust is built, and how persuasion and manipulation operate at scale.

This raises foundational design questions:

  • How do we represent and protect human preferences when they’re expressed through intermediating AI systems?
  • What new markets and norms emerge when AI can cheaply generate convincing arguments, tailored messaging, or strategic explanations?
  • What does accountability look like when decisions are the output of human-AI teams—or of automated negotiations between agents?

In the long run, the big opportunity (and challenge) is designing the mechanisms—identity, provenance, incentives, auditing, governance—that make delegation to AI socially beneficial rather than destabilizing. That’s where market design and institutional thinking become essential.

The D^3 Associates Program supports and accelerates faculty research into the ways AI and digital technologies are reshaping companies, organizations, society, and practice.
