On May 7, the Digital Data Design Institute at Harvard hosted Leading with AI: Exploring Business and Technology Frontiers. The conference featured a presentation on the power of AI from Ewa Dürr, Head of Product Management for Google Cloud Artificial Intelligence. Drawing on Google’s extensive research and development in the AI space, Dürr outlined trends in AI, explained how Google is helping to accelerate AI-driven transformation, and discussed the benefits and risks involved.
Key Insight: Multimodal and Generalized
AI is becoming multimodal and generalized, able to interpret images, audio, and video. It is also becoming more democratized: no longer the exclusive domain of engineers and those with specialized knowledge, AI is something everyone can interact with today.
Key Insight: Perception and Sensing
Dürr played audio that demonstrated AI’s ability to express emotion, sounding casual, lively, or apologetic. By understanding broader context, AI can interpret an audience’s emotional state and use intelligence and reasoning to respond appropriately.
Key Insight: Empowerment and Access
As they put AI capabilities into production, enterprises must consider their readiness, as well as data privacy, governance, and other protections, as part of the process. Google offers enterprises a “model garden” of tools from Google and other vendors, but to choose a model, companies must first consider what business problems they are trying to solve and what success would look like. When identifying a model, companies must also understand their own capacity to fine-tune it, including the skills of their personnel and the quality of their data.
Key Insight: Possibilities and Warning Signs
Dürr illustrated the transformative possibilities of AI with a simulation of an in-car voice assistant from Google Cymbal. The assistant supported an individual’s trip to work not just with maps, but by reading and sending emails, sending calendar invites, and suggesting and reserving parking and restaurants.
However, she noted that these opportunities and benefits come with risks. Dürr emphasized the importance of responsible AI, stressing that her team considers how AI can be misused and develops corresponding safeguards, such as:
- Tooling and grounding: Digital watermarking, privacy and intellectual property protections, citations and recitation checks, and safety and bias filters are examples of solutions that help to ensure the safety of AI input and output.
- Community: Educational institutions, regulators, governments, and enterprises must work together to drive responsible use of AI and identify best practices.
Dürr concluded by urging attendees to keep the risks in mind while exploring the opportunities of AI, noting that “AI is only successful if it’s responsible.”
Meet the Speaker
Ewa Dürr leads the Product Management team for Cloud Artificial Intelligence at Google in California. She obtained Master’s degrees from SGH Warsaw School of Economics and Harvard Business School and executive education credentials from Stanford and Harvard Kennedy School.
Additional Resources
- Successful AI Means Responsible AI (DLD News article) – How AI can advance humanity and benefit society as a whole through responsible use.