
Leading with AI: The Power of AI

On May 7, the Digital Data Design Institute at Harvard hosted Leading with AI: Exploring Business and Technology Frontiers. The conference featured a presentation on the power of AI from Ewa Dürr, Head of Product Management for Google Cloud Artificial Intelligence. Drawing on Google's extensive research and development in the AI space, Dürr outlined trends in AI, explained how Google is helping to accelerate AI transformation, and discussed the benefits and risks involved.

Key Insight: Multimodal and Generalized

"AI is becoming a little bit like human beings. It's more sensing, it's becoming multimodal, it can understand and has the perception of audio, video, images, all at the same time."

AI is becoming multimodal and generalized, able to detect images, audio, and video. In addition, it is becoming more democratized and is no longer only the domain of engineers and those with specialized knowledge. Today, everyone can interact with AI.

Key Insight: Perception and Sensing

"AI is becoming indistinguishable from humans."

Dürr played audio that demonstrated AI's ability to convey emotion: casual, lively, apologetic. By understanding broader context, it can interpret an audience's emotional state and use intelligence and reasoning to respond appropriately.

Key Insight: Empowerment and Access

"Google [has] the history of bringing AI and embedding that AI within every product that we bring to the market, but as well to give the access to the technology to the other companies so that you can leverage and build on top of that."

As they put AI capabilities into production, enterprises must consider their readiness, along with data privacy, governance, and other protections, as part of the process. Google offers enterprises a "model garden" of tools from Google and other vendors, but to choose a model, companies must consider what business problems they are trying to solve and what success would look like. When identifying a model, companies must also understand their own capacity to fine-tune it, including their personnel and data quality.

Key Insight: Possibilities and Warning Signs

"Responsible AI has to be at the core of every single day and every single thing we do within AI."

Dürr illustrated the transformative possibilities of AI with a simulation of an in-car voice assistant from Google Cymbal. AI supported an individual's trip to work, not just with maps, but by reading and sending emails, sending calendar invites, and making parking and restaurant suggestions and reservations.

However, she noted that these opportunities and benefits also come with risks. Dürr emphasized the importance of responsible AI, stressing that her team considers how AI can be misused and develops solutions to mitigate those risks, such as:

  • Tooling and grounding: Digital watermarking, privacy and intellectual property protections, citations and recitation checks, and safety and bias filters are examples of solutions that help to ensure the safety of AI input and output.
  • Community: Educational institutions, regulators, governments, and enterprises must work together to drive responsible use of AI and identify best practices.

Dürr concluded by reinforcing the need to remember the risks while exploring the opportunities of AI, and noting that "AI is only successful if it's responsible."

Meet the Speaker

Ewa Dürr leads the Product Management team for Cloud Artificial Intelligence at Google in California. She obtained Master's degrees from SGH Warsaw School of Economics and Harvard Business School and executive education credentials from Stanford and Harvard Kennedy School.
