The Evolution of Radiology Image Annotation in the Era of Large Language Models.
Review
Abstract
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version; during production of the final copyedited article, errors may be discovered that could affect the content.

Although there are relatively few diverse, high-quality medical imaging datasets on which to train computer vision artificial intelligence models, even fewer datasets contain expertly classified observations that can be repurposed to train or test such models. The traditional annotation process is laborious and time-consuming, and repurposing annotations or consolidating similar annotations from disparate sources has never been practical. Until recently, using natural language processing to convert a clinical radiology report into labels required custom training of a language model for each use case. Newer technologies such as large language models now make it possible to generate accurate, normalized labels at scale from the clinical reports alone, using specific prompt engineering. Combining these automatically extracted and normalized report labels with foundation image models provides a means to create labels for model training. This article provides a short history and review of the annotation and labeling of medical images, from traditional manual methods to the newest semiautomated methods, which offer a more scalable path to creating useful models efficiently. ©RSNA, 2025.
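To make the label-extraction idea concrete, the following is a minimal sketch of how a prompt might ask a large language model to map a free-text radiology report onto a fixed, normalized label set, with the model's reply parsed into labels. The prompt wording, the label set, and the stubbed JSON response are all illustrative assumptions, not the method described in this article; a real pipeline would call an actual LLM service in place of the stub.

```python
import json

# Hypothetical prompt template (an assumption for illustration, not the
# article's prompt): ask the model to classify each target finding.
PROMPT_TEMPLATE = (
    "Extract the findings from the radiology report below. "
    "Return a JSON object mapping each finding in {labels} to one of "
    "'present', 'absent', or 'uncertain'.\n\nReport:\n{report}"
)

# Illustrative normalized label set.
LABEL_SET = ["pneumothorax", "pleural effusion", "cardiomegaly"]


def build_prompt(report: str) -> str:
    """Fill the template with the target label set and the report text."""
    return PROMPT_TEMPLATE.format(labels=LABEL_SET, report=report)


def parse_labels(llm_response: str) -> dict:
    """Parse the model's JSON reply into a normalized label dictionary,
    defaulting any label the model omitted to 'uncertain'."""
    raw = json.loads(llm_response)
    return {label: raw.get(label, "uncertain") for label in LABEL_SET}


# A stubbed model reply stands in for a real LLM call.
stub_response = '{"pneumothorax": "absent", "pleural effusion": "present"}'
labels = parse_labels(stub_response)
```

The key design point sketched here is normalization: every report yields the same fixed set of labels in a controlled vocabulary, which is what makes labels from disparate reports consolidatable into a single training set.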