A Dual-Modality Ultrasound Video Recognition Model for Distinguishing Subpleural Pulmonary Nodules.
Abstract
OBJECTIVE: To develop a deep learning model based on dual-modality ultrasound (DMUS) video recognition for the differential diagnosis of benign and malignant subpleural pulmonary nodules (SPNs).

PATIENTS AND METHODS: Data from 193 participants with SPNs (median age, 58 years [IQR, 34-66 years]; 123 men), prospectively collected from January 7 to December 21, 2020, were divided into training (n=154) and validation (n=39) sets in an 8:2 ratio. Additionally, independent internal (n=88) and external (n=91) test sets were prospectively collected from January 10 to June 25, 2021. The nature of the SPNs was determined through biopsy (n=306) and clinical follow-up (n=66). Our model integrated DMUS videos, time-intensity curves, and clinical information. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity and was compared with state-of-the-art video classification models as well as with ultrasound and computed tomography diagnoses made by radiologists.

RESULTS: In the internal test set, our model accurately distinguished malignant from benign SPNs with an AUC, accuracy, sensitivity, and specificity of 0.91, 91% (80 of 88), 90% (27 of 30), and 91% (53 of 58), respectively, outperforming state-of-the-art video classification models (all P<.05). In the external test set, the model achieved an accuracy, sensitivity, and specificity of 89% (81 of 91), 84% (27 of 32), and 92% (54 of 59), respectively, exceeding the accuracy of radiologist interpretation of both ultrasound (81% [74 of 91], 63% [20 of 32], and 92% [54 of 59]) and computed tomography (76% [69 of 91], 91% [29 of 32], and 68% [40 of 59]).

CONCLUSION: This deep learning model based on DMUS video recognition enhances the performance of ultrasound in differentiating benign from malignant SPNs.

TRIAL REGISTRATION: Chinese Clinical Trial Registry Identifier: ChiCTR1800019828.
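For readers who want to check the arithmetic behind the reported operating points, the sketch below reconstructs the confusion-matrix counts given in the RESULTS section and recomputes accuracy, sensitivity, and specificity. It is an illustrative verification only, not the authors' evaluation code; the function and variable names are invented for this example, and the AUC cannot be reproduced here because it requires the continuous model outputs rather than the counts reported in the abstract.

```python
def metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,   # correctly classified nodules / all nodules
        "sensitivity": tp / (tp + fn),   # malignant SPNs correctly called malignant
        "specificity": tn / (tn + fp),   # benign SPNs correctly called benign
    }

# Internal test set (n=88): 30 malignant (27 correct), 58 benign (53 correct).
internal = metrics(tp=27, fn=3, tn=53, fp=5)

# External test set (n=91): 32 malignant (27 correct), 59 benign (54 correct).
external = metrics(tp=27, fn=5, tn=54, fp=5)

for name, m in [("internal", internal), ("external", external)]:
    print(name, {k: f"{v:.0%}" for k, v in m.items()})
```

Running the sketch prints 91%/90%/91% for the internal test set and 89%/84%/92% for the external test set, matching the rounded values reported above.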