A practical guide to the implementation of AI in orthopaedic research, Part 6: How to evaluate the performance of AI research? Review

Overview

abstract

  • Artificial intelligence's (AI) accelerating progress demands rigorous evaluation standards to ensure its safe, effective integration into healthcare's high-stakes decisions. As AI increasingly enables prediction, analysis and judgement capabilities relevant to medicine, proper evaluation and interpretation are indispensable. Erroneous AI could endanger patients; thus, developing, validating and deploying medical AI demands adherence to strict, transparent standards centred on safety, ethics and responsible oversight. Core considerations include assessing performance on diverse real-world data, collaborating with domain experts, confirming model reliability and limitations, and advancing interpretability. Thoughtful selection of evaluation metrics suited to the clinical context, along with testing on diverse data sets representing different populations, improves generalisability. Partnering with software engineers, data scientists and medical practitioners grounds assessment in real needs. Journals must uphold reporting standards matching AI's societal impacts. With rigorous, holistic evaluation frameworks, AI can progress towards expanding healthcare access and quality. LEVEL OF EVIDENCE: Level V.

authors

  • Oettl, Felix C
  • Pareek, Ayoosh
  • Winkler, Philipp W
  • Zsidai, Bálint
  • Pruneski, James A
  • Senorski, Eric Hamrin
  • Kopf, Sebastian
  • Ley, Christophe
  • Herbst, Elmar
  • Oeding, Jacob F
  • Grassi, Alberto
  • Hirschmann, Michael T
  • Musahl, Volker
  • Samuelsson, Kristian
  • Tischer, Thomas
  • Feldt, Robert

publication date

  • May 31, 2024

Identity

PubMed Central ID

  • PMC11141501

Scopus Document Identifier

  • 85195108728

Digital Object Identifier (DOI)

  • 10.1002/jeo2.12039

PubMed ID

  • 38826500

Additional Document Info

volume

  • 11

issue

  • 3