Towards responsible artificial intelligence in healthcare: getting real about real-world data and evidence.
Abstract
BACKGROUND: The use of real-world data (RWD) in artificial intelligence (AI) applications for healthcare offers unique opportunities but also poses complex challenges related to interpretability, transparency, safety, efficacy, bias, equity, privacy, ethics, accountability, and stakeholder engagement.

METHODS: A multi-stakeholder expert panel comprising healthcare professionals, AI developers, policymakers, and other stakeholders was assembled. Their task was to identify critical issues and formulate consensus recommendations, focusing on the responsible use of RWD in healthcare AI. The panel's work involved an in-person conference and workshop and extensive deliberations over several months.

RESULTS: The panel's findings revealed several critical challenges, including the necessity for data literacy and documentation, the identification and mitigation of bias, privacy and ethics considerations, and the absence of an accountability structure for stakeholder management. To address these, the panel proposed a series of recommendations, such as the adoption of metadata standards for RWD sources, the development of transparency frameworks and instructional labels likened to "nutrition labels" for AI applications, the provision of cross-disciplinary training materials, the implementation of bias detection and mitigation strategies, and the establishment of ongoing monitoring and update processes.

CONCLUSION: Guidelines and resources focused on the responsible use of RWD in healthcare AI are essential for developing safe, effective, equitable, and trustworthy applications. The proposed recommendations provide a foundation for a comprehensive framework addressing the entire lifecycle of healthcare AI, emphasizing the importance of documentation, training, transparency, accountability, and multi-stakeholder engagement.
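
To give a concrete sense of the "nutrition label" and metadata-documentation recommendations, the following minimal Python sketch shows one way such a transparency record for an RWD-trained model might be structured. The schema, field names, and example values are illustrative assumptions made for this sketch and are not a standard proposed by the panel.

    from dataclasses import dataclass, asdict
    from datetime import date
    import json


    @dataclass
    class ModelNutritionLabel:
        """Hypothetical 'nutrition label' metadata for a healthcare AI application.

        Field names are illustrative assumptions, not an established schema.
        """
        model_name: str
        intended_use: str
        rwd_sources: list[str]        # real-world data sources used for training
        population_coverage: str      # who is (and is not) represented in the data
        known_limitations: list[str]  # documented biases or data gaps
        bias_mitigations: list[str]   # steps taken to detect and mitigate bias
        last_updated: str             # ISO date of last retraining or review
        monitoring_plan: str          # how ongoing performance is tracked
        accountable_party: str        # who answers for the model's behavior


    # Example label for a fictitious model; all values are invented for illustration.
    label = ModelNutritionLabel(
        model_name="readmission-risk-v1",
        intended_use="Flag adult inpatients at elevated 30-day readmission risk for care-team review.",
        rwd_sources=["EHR encounters 2018-2023 (single health system)", "claims extracts"],
        population_coverage="Adults 18+ admitted to medical services; pediatric and obstetric admissions excluded.",
        known_limitations=["Under-representation of rural patients", "Missing social-needs data"],
        bias_mitigations=["Subgroup performance audit by race, sex, and payer", "Per-site threshold recalibration"],
        last_updated=date(2024, 1, 15).isoformat(),
        monitoring_plan="Quarterly drift and subgroup-performance review.",
        accountable_party="Clinical AI governance committee",
    )

    # Serialize to JSON so the label can be published alongside the model.
    print(json.dumps(asdict(label), indent=2))

A machine-readable record like this is one possible way to support the documentation, transparency, and ongoing-monitoring themes in the recommendations; the actual metadata standards envisioned by the panel would need to be defined through the multi-stakeholder process described above.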