
Leading with AI: The Power of AI

On May 7, the Digital Data Design Institute at Harvard hosted Leading with AI: Exploring Business and Technology Frontiers. The conference featured a presentation on the power of AI from Ewa Dürr, Head of Product Management for Google Cloud Artificial Intelligence. Drawing on Google’s extensive research and development in the AI space, Dürr outlined trends in AI, explained how Google is helping to accelerate AI-driven transformation, and described the benefits and risks involved.

Key Insight: Multimodal and Generalized

“AI is becoming a little bit like human beings. It’s more sensing, it’s becoming multimodal, it can understand and has the perception of audio, video, images, all at the same time.”

AI is becoming multimodal and generalized, able to perceive and interpret images, audio, and video simultaneously. In addition, it is becoming more democratized: it is no longer only the domain of engineers and those with specialized knowledge. Today, everyone can interact with AI.

Key Insight: Perception and Sensing

“AI is becoming indistinguishable from humans.”

Dürr played audio that demonstrated AI’s ability to convey emotion, sounding casual, lively, or apologetic. By understanding broader context, AI can interpret an audience’s emotional state and use intelligence and reasoning to respond appropriately.

Key Insight: Empowerment and Access

“Google [has] the history of bringing AI and embedding that AI within every product that we bring to the market, but as well to give the access to the technology to the other companies so that you can leverage and build on top of that.”

As they put AI capabilities into production, enterprises must consider their readiness, as well as data privacy, governance, and other protections, as part of the process. Google offers a “model garden” of tools from Google and other vendors to enterprises, but to choose a model, companies must first consider what business problems they are trying to solve and what success would look like. When identifying a model, companies must also understand their own capacity to fine-tune it, including their personnel and data quality.
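
To make the model-selection step more concrete, the sketch below shows how a team might prototype against a Google foundation model using the Vertex AI Python SDK, the interface through which Model Garden models are commonly consumed. This is an illustrative assumption rather than a workflow Dürr described; the project ID, region, model name, and prompt are placeholders.

    # Illustrative sketch only: project ID, region, model name, and prompt are placeholders.
    # Requires the google-cloud-aiplatform package.
    import vertexai
    from vertexai.generative_models import GenerativeModel

    # Point the SDK at the enterprise's own Google Cloud project and region.
    vertexai.init(project="your-project-id", location="us-central1")

    # Load a Google foundation model of the kind surfaced in Model Garden.
    model = GenerativeModel("gemini-1.5-flash")

    # Prototype against a concrete business problem, e.g. summarizing a support ticket.
    response = model.generate_content(
        "Summarize this customer support ticket in two sentences: ..."
    )
    print(response.text)

Starting from a narrowly scoped prompt like this keeps the evaluation tied to a specific business problem and a measurable definition of success, which is the selection criterion Dürr emphasized.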

Key Insight: Possibilities and Warning Signs

“Responsible AI has to be at the core of every single day and every single thing we do within AI.”

Dürr illustrated the transformative possibilities of AI with a simulation of an in-car voice assistant from Google’s Cymbal demo. The AI assisted an individual’s trip to work not just with maps, but by reading and sending emails, sending calendar invites, and making parking and restaurant suggestions and reservations.

However, she noted that these opportunities and benefits come with risks. Dürr emphasized the importance of responsible AI, stressing that her team considers how AI can be misused and develops corresponding safeguards, such as:

  • Tooling and grounding: Digital watermarking, privacy and intellectual property protections, citations and recitation checks, and safety and bias filters are examples of solutions that help ensure the safety of AI inputs and outputs (a brief configuration sketch follows this list).
  • Community: Educational institutions, regulators, governments, and enterprises must work together to drive responsible use of AI and identify best practices.
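
As a concrete example of the “safety and bias filters” mentioned above, here is a hedged sketch of configuring per-category safety thresholds with the Vertex AI Python SDK. The categories, thresholds, and model name are illustrative assumptions, not settings from the talk.

    # Illustrative sketch only: categories, thresholds, and model name are assumptions.
    import vertexai
    from vertexai.generative_models import (
        GenerativeModel,
        SafetySetting,
        HarmCategory,
        HarmBlockThreshold,
    )

    vertexai.init(project="your-project-id", location="us-central1")

    # Block output in selected harm categories at medium-or-above likelihood.
    safety_settings = [
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
            threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        ),
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_HARASSMENT,
            threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        ),
    ]

    model = GenerativeModel("gemini-1.5-flash")
    response = model.generate_content(
        "Draft a polite reply to this customer email: ...",
        safety_settings=safety_settings,
    )
    print(response.text)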

Dürr concluded by reinforcing the need to remember the risks while exploring the opportunities of AI, and noting that “AI is only successful if it’s responsible.”

Meet the Speaker

Ewa Dürr leads the Product Management team for Cloud Artificial Intelligence at Google in California. She holds master’s degrees from SGH Warsaw School of Economics and Harvard Business School and has completed executive education programs at Stanford and Harvard Kennedy School.
