Why AI can’t afford to discount diversity

How can businesses protect sensitive user data to foster trust among consumers? How can they design products and tools that avoid racial, gender, and other biases against users? How should they decide who uses their products, and for what purpose?

The answers to these and other questions are especially pertinent to the developers of artificial intelligence. And they require the input and expertise of not just computer scientists and engineers, but also lawyers, philosophers, business leaders, and regulators. Fostering productive and ongoing dialogue between these groups is essential.

The Digital Initiative at Harvard Business School provided one such forum at the recent Digital Transformation Summit: AI, Ethics, and Business Decisions. Speakers discussed the challenges of integrating ethics into routine decision making, how law and policy will have to adapt to the changing realities of AI, and how some pioneering companies are navigating this uncertain environment.

“…it’s clear that the potential applications—and hazards—of AI are both serious and sometimes difficult to predict.”

It’s no secret that AI has come under public scrutiny in recent years. From the role of Facebook algorithms in escalating violence and genocide in Myanmar, to the fatal accident involving an Uber autonomous vehicle, it’s clear that the potential applications—and hazards—of AI are both serious and sometimes difficult to predict. According to Dr. Cansu Canca, founder and director of the AI Ethics Lab, companies must work proactively to identify, understand, analyze, and implement solutions to ethical concerns surrounding AI at every stage of the product design and development process.

To do this, Canca says, “we need to train researchers and developers to engage in this kind of structured thinking when they are dealing with ethics questions… so that they can identify these problems early on as they arise and tackle them in real time, at each stage of the innovation process.” This approach aims to avoid unethical outcomes while enhancing the technology and preventing costly redesigns and product delays. Perhaps most importantly, Canca explains, it is a proactive rather than reactive system, giving researchers a framework to carry forward as they confront the multifaceted and evolving challenges posed by AI technology, rather than responding on a case-by-case basis.

Just as companies must adapt to the ethical issues posed by AI, so too must our legal system. If an Uber AV crashes into a human-operated vehicle making a left turn, who is to blame? Does your answer change if you know that the AV had the right of way, but the driver’s situational knowledge informed her decision to make the turn? According to Matthew Wansley, general counsel at nuTonomy, “jurors are going to increasingly find that moral intuition cannot generate a determinate answer” in these kinds of situations. In other words, it’s easy to assess the liability of a drunk or otherwise impaired driver—less so for an AV whose only failing might be imperfect or incomplete coding. So how can, or should, the legal system respond? “There is no obvious and easy solution,” Wansley says, though he suggests that tort law will need to evolve, possibly by relying less on lay juries to adjudicate liability.

The legal gray area surrounding AI is part of the reason that so much responsibility for safety and accountability currently falls to the companies at the forefront of this technology. Dr. Rana el Kaliouby, CEO and co-founder of Affectiva, is taking this responsibility very seriously, and she is on a mission to popularize what she terms “artificial emotional intelligence” or “Emotion AI.” Why is this so important? As el Kaliouby explains, AI is becoming increasingly present in our daily lives, and whether we realize it or not, “we are forming a new kind of partnership between humans and machines.” “This partnership,” she asserts, “requires a new social contract based on mutual trust.”

El Kaliouby believes part of the reason AI has experienced so many high-profile failures is that while the technology has a very high IQ, it possesses no empathy or emotional intelligence. “That’s the missing link,” she says—emotionally intelligent technology that can read the nuance of human facial expressions and emotional cues to achieve a deeper understanding of its user or operator. At Affectiva, el Kaliouby and her team have collected and analyzed data from over 7.8 million faces across 87 countries in order to develop software that can do just this. Such a large and diverse data set is essential to teach machines to read the incredibly varied and nuanced spectrum of human emotional states and to avoid algorithmic bias.

“The legal gray area surrounding AI is part of the reason that so much responsibility for safety and accountability currently falls to the companies at the forefront of this technology.”

But development is only one part of this social contract. With a technology as ubiquitous as AI, the use cases are seemingly endless, but they are not all equal. As el Kaliouby explains, “technology is neutral.” The same software that can help improve automotive safety and advance mental health care could also be used to manipulate and discriminate against users.

That’s why, el Kaliouby recounts, Affectiva turned down a $40 million investment from a security agency in 2011. “I asked myself,” she said, “do I want to spend my mindshare, and my team’s mindshare, on a problem where we’re not building trust and respecting users?” The company—as a team—decided that the answer was no. A commitment to inclusion and value-based decision making is key for companies faced with decisions about how to apply their technology and whom to partner with. Indeed, in recent years, many companies—including Google and Microsoft—have weathered employee protests over the licensing of products to security and surveillance agencies.

So although the path ahead for AI is far from clear, what’s certain is that not just management but employees—and even consumers—will play an active role in deciding the future of this powerful technology. That, again, is a testament to the need for a diverse set of views, perspectives, and backgrounds to solve these challenging problems.
