Scaling AI for Hospitals and Healthcare Providers

Insights from the March 5th, 2024 session on Gen AI use cases among Healthcare Providers

Recordings from the Generative AI in Healthcare series can be found here –
session 1, session 2, session 3, and session 4

In the fourth session of the Generative AI in Healthcare series, speakers Nikhil Bhojwani (Recon Strategy) and Satish Tadikonda (HBS) outlined the role of generative AI for hospitals and healthcare providers, followed by an engaging panel discussion with guest experts Marc Succi, Alexandre Momeni, Frederik Bay, and Timothy Driscoll, who shared their perspectives on current and future AI applications in the space.

Current Landscape

Nikhil Bhojwani discussed the potential applications of AI across health systems, emphasizing its role in areas including clinical work, education, research, patient interaction, revenue cycle management, interoperability, and general organizational functions. Even within the complexity of health systems, there exist a multitude of opportunities for AI to augment, substitute, or support human activities across different departments and functions. Nikhil and Satish encouraged further exploration of specific use cases within each domain and invited the panelists to share their perspectives on the practical applications of AI in the context of hospitals and providers.

Opportunities in Digital Health

Marc Succi discussed various opportunities within Mass General Brigham, ranging from low-risk to high-risk endeavors, with differing timelines for implementation and adoption. While certain initiatives like streamlined prior authorization are already being implemented, more disruptive concepts such as clinical workflow and decision support are expected to take longer. Succi emphasized the importance of ensuring equity, enhancing patient experience, and addressing healthcare worker burnout in the implementation of AI technologies.

Alexandre Momeni of General Catalyst elaborated on three ways health systems can utilize AI: for innovation, transformation, and efficiency. He discussed the regulatory frameworks surrounding AI in clinical decision support and highlighted the potential for AI to significantly impact healthcare workflows.

Boston Children’s Hospital’s Timothy Driscoll outlined the institution’s numerous applications of AI, including operational efficiency, clinical decision support, research, education, and patient care, stressing the importance of responsible AI development and governance structures to maximize its benefits in healthcare settings.

Frederik Bay discussed Adobe’s focus on patient engagement and digital marketing expertise, utilizing generative AI to overcome traditional barriers to adoption in healthcare systems. He highlighted opportunities for personalized engagement and document management, including faster document creation, insights extraction from existing data, and image tagging and labeling, but clarified that Adobe was not currently focused on the clinical side of radiology due to security and legal considerations.

Strategic AI Implementation

Timothy Driscoll then described his strategic approach to the AI portfolio at Boston Children’s Hospital, focusing on objectives such as demonstrating AI’s impact on care quality, ensuring ethical and sustainable use, and driving efficiency and expertise. The hospital holds itself to key principles of diversity, fairness, accountability, and robust governance, fostering a commitment to inclusive and transparent AI development. Driscoll also discussed specific areas where AI drove value, including diagnostic support models and synthesizing complex patient data for frontline staff. He noted a phased approach to implementation, building foundational capabilities, defining prioritization frameworks, and rapidly scaling high-impact use cases. When asked about the use of synthetic data and compliance, Driscoll explained his team’s focus on leveraging actual patient data, but acknowledged scenarios where synthetic data was used for intelligent automations, such as resume scanning.

Marc Succi also shared Mass General Brigham’s approach to AI adoption, outlining the importance of research and validation through their data science office. He noted the challenges of FDA approval versus actual adoption in patient care, which raises the need for socialization and education within the healthcare community. Succi discussed the deployment of low-risk tools to familiarize users with AI concepts and mentioned ongoing investigations into clinical decision support algorithms, noting the impact on operational use cases in clinical settings.

Risk and Responsibility

To close out the session, Nikhil Bhojwani shared some of the unique risks related to irresponsible AI use in healthcare, referring to the Responsible AI Institute’s framework to categorize risks. Among the examples Bhojwani gave were inaccuracies in AI-generated notes by scribes, safety concerns regarding AI-driven drug delivery systems, resilience issues with predictive models like sepsis detection, accountability challenges in AI recommendations, explainability difficulties, privacy risks from de-anonymization of data, and fairness concerns due to biases in training data. These use cases illustrate the multifaceted nature of AI risks in provider systems and underscore the need for robust solutions to ensure responsible implementation.

Momeni added that trust in AI systems is of utmost importance and suggested three key considerations: the degree of automation, benchmarking and evaluation methods, and the establishment of industry standards. Bay noted that transparency and governance processes are also key to establishing trust in AI development, while Succi and Driscoll both emphasized the importance of checks and balances to ensure responsible use. As an example, they mentioned existing practices where physicians review AI-generated notes and reports, driving home the consensus that human accountability remains crucial, especially with potential concerns about over-reliance on AI. The panel agreed that with robust accountability mechanisms in place, such tools could be used to vastly improve the experience of both patients and providers.

The Gen AI in Healthcare series is collaboratively produced by Harvard’s Digital, Data, Design (D^3) Institute and the Responsible AI Institute.

About Responsible AI Institute

Founded in 2016, Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. RAI Institute’s conformity assessments and certifications for AI systems support practitioners as they navigate the complex landscape of AI products. Members including ATB Financial, Amazon Web Services, Boston Consulting Group, Yum! Brands, and many other leading companies and institutions collaborate with RAI Institute to bring responsible AI to all industry sectors.

Engage With Us

Join Our Community

Ready to dive deeper with the Digital Data Design Institute at Harvard? Subscribe to our newsletter, contribute to the conversation, and begin to invent the future for yourself, your business, and society as a whole.