Utilizing Longitudinal Chest X-Rays and Reports to Pre-fill Radiology Reports. Academic Article

Overview

abstract

  • Despite the reduction in turnaround times for radiology reporting brought by speech recognition software, persistent communication errors can significantly affect the interpretation of radiology reports. Pre-filling a radiology report holds promise for mitigating reporting errors, yet despite multiple efforts in the literature to generate comprehensive medical reports, approaches that exploit the longitudinal nature of patient visit records in the MIMIC-CXR dataset are lacking. To address this gap, we propose to use longitudinal multi-modal data, i.e., the previous visit's CXR, the current visit's CXR, and the previous visit's report, to pre-fill the "findings" section of the patient's current visit report. We first gathered the longitudinal visit information for 26,625 patients from the MIMIC-CXR dataset and created a new dataset called Longitudinal-MIMIC. With this new dataset, a transformer-based model was trained to capture the multi-modal longitudinal information from patient visit records (CXR images + reports) via a cross-attention-based multi-modal fusion module and a hierarchical memory-driven decoder. In contrast to previous works that use only current visit data as input to train a model, our work exploits the longitudinal information available to pre-fill the "findings" section of radiology reports. Experiments show that our approach outperforms several recent approaches by ≥3% on F1 score and by ≥2% on BLEU-4, METEOR, and ROUGE-L. Code will be published at https://github.com/CelestialShine/Longitudinal-Chest-X-Ray.
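
  The sketch below is a minimal, illustrative take on the kind of cross-attention-based multi-modal fusion the abstract describes: current-visit CXR features attend to the previous visit's CXR and report features, and the fused output would feed a report decoder. It is not the authors' implementation (that is in the linked repository); the class name, feature dimensions, and the residual combination are assumptions made for illustration.

  ```python
  # Illustrative sketch only -- not the paper's actual code (see the GitHub
  # repository referenced in the abstract). Names and dimensions are assumed.
  import torch
  import torch.nn as nn

  class LongitudinalFusion(nn.Module):
      """Fuse current-visit CXR features with prior-visit CXR and report
      features via cross-attention, yielding a representation a report
      decoder could attend to when pre-filling the "findings" section."""

      def __init__(self, d_model: int = 512, n_heads: int = 8):
          super().__init__()
          # Current-visit image features attend to the previous visit's image features.
          self.img_cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
          # ...and to the previous visit's report token features.
          self.txt_cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
          self.norm = nn.LayerNorm(d_model)

      def forward(self, cur_img, prev_img, prev_report):
          # cur_img:     (B, N_cur, d)  patch features of the current CXR
          # prev_img:    (B, N_prev, d) patch features of the previous CXR
          # prev_report: (B, T, d)      token features of the previous report
          img_ctx, _ = self.img_cross_attn(query=cur_img, key=prev_img, value=prev_img)
          txt_ctx, _ = self.txt_cross_attn(query=cur_img, key=prev_report, value=prev_report)
          # Residual combination of current features with longitudinal context
          # (an assumed fusion rule, chosen here for simplicity).
          return self.norm(cur_img + img_ctx + txt_ctx)

  # Toy usage with random tensors standing in for encoder outputs.
  fusion = LongitudinalFusion()
  fused = fusion(torch.randn(2, 49, 512), torch.randn(2, 49, 512), torch.randn(2, 100, 512))
  print(fused.shape)  # torch.Size([2, 49, 512])
  ```

  In this reading, the fused features would serve as the memory that the hierarchical, memory-driven decoder mentioned in the abstract attends to while generating the current visit's findings text.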

publication date

  • October 1, 2023

Identity

PubMed Central ID

  • PMC10947431

Scopus Document Identifier

  • 85174680767

Digital Object Identifier (DOI)

  • 10.1007/978-3-031-43904-9_19

PubMed ID

  • 38501075

Additional Document Info

volume

  • 14224