Toward responsible AI governance: Balancing multi-stakeholder perspectives on AI in healthcare.
Abstract
INTRODUCTION: The rapid integration of artificial intelligence (AI) into healthcare presents significant governance challenges, requiring balanced approaches that safeguard safety, efficacy, equity, and trust (SEET). This study proposes a cognitive framework to guide AI governance, addressing tradeoffs among speed, scope, and capability.

OBJECTIVE: To develop a structured governance model that harmonizes stakeholder perspectives, focusing on the multi-dimensional challenges and ethical principles essential for AI in healthcare.

METHODS: A multidisciplinary team convened at the Blueprints for Trust conference, organized by the American Medical Informatics Association (AMIA) and the Division of Clinical Informatics at Beth Israel Deaconess Medical Center. Following extensive discussions with 190 participants across sectors, three governance models were identified, each addressing a specific domain: (1) Clinical Decision Support (CDS), (2) Real-World Evidence (RWE), and (3) Consumer Health (CH).

RESULTS: Three governance models emerged, tailored to the CDS, RWE, and CH domains. Key recommendations include establishing a Health AI Consumer Consortium for patient-centered oversight, initiating voluntary accreditation and certification frameworks, and piloting risk-level-based standards. These models balance rapid adaptation with SEET-focused safeguards through transparency, inclusivity, and ongoing learning.

CONCLUSION: A proactive, constraint-based governance framework is critical for responsible AI integration in healthcare. This structured, multi-stakeholder approach provides a roadmap for ethical, transparent governance that can evolve with technological advancements, enhancing trust and safety in healthcare AI applications.