A Multimodal Deep Learning Model for the Classification of Breast Cancer Subtypes.
Academic Article
Abstract
Background: Breast cancer is a heterogeneous disease with distinct molecular subtypes, each requiring tailored therapeutic strategies. Accurate classification of these subtypes is crucial for optimizing treatment and improving patient outcomes. While immunohistochemistry remains the gold standard for subtyping, it is invasive and may not fully capture tumor heterogeneity. Artificial Intelligence (AI), particularly Deep Learning (DL), offers a promising non-invasive alternative by analyzing medical imaging data.

Methods: In this study, we propose a multimodal DL model that integrates mammography images with clinical metadata to classify breast lesions into five categories: benign, luminal A, luminal B, HER2-enriched, and triple-negative. Using the publicly available Chinese Mammography Database (CMMD), our model was trained and evaluated on a dataset of 4056 images from 1775 patients.

Results: The proposed multimodal approach significantly outperformed a unimodal model based solely on mammography images, achieving an AUC of 88.87% for multiclass classification of these five categories, compared to an AUC of 61.3% for the unimodal model.

Conclusions: These findings highlight the potential of multimodal AI-driven approaches for non-invasive breast cancer subtype classification, paving the way for improved diagnostic precision and personalized treatment strategies.
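The abstract does not specify the fusion architecture, but the general idea of combining an image branch with a clinical-metadata branch before a five-way classifier can be sketched as follows. This is a minimal illustrative forward pass, not the authors' model: the layer sizes, the ReLU late-fusion design, and the example clinical fields are all assumptions.

```python
import numpy as np

# Hypothetical late-fusion sketch (NOT the paper's actual architecture):
# project each modality into a feature space, concatenate, then apply a
# softmax over the five categories studied in the paper.

CLASSES = ["benign", "luminal A", "luminal B", "HER2-enriched", "triple-negative"]

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fused_forward(image_feats, meta_feats, params):
    """One forward pass: per-modality projection, concatenation, classification."""
    h_img = np.maximum(0.0, image_feats @ params["W_img"])   # image branch (ReLU)
    h_meta = np.maximum(0.0, meta_feats @ params["W_meta"])  # metadata branch (ReLU)
    fused = np.concatenate([h_img, h_meta], axis=-1)         # late fusion
    return softmax(fused @ params["W_out"])                  # class probabilities

rng = np.random.default_rng(0)

# Toy dimensions: a 64-d pooled image descriptor and 4 clinical fields
# (e.g. age, abnormality type, laterality, breast density) -- illustrative only.
params = {
    "W_img": rng.normal(size=(64, 16)),
    "W_meta": rng.normal(size=(4, 8)),
    "W_out": rng.normal(size=(16 + 8, len(CLASSES))),
}

probs = fused_forward(rng.normal(size=(2, 64)), rng.normal(size=(2, 4)), params)
print(probs.shape)  # one 5-way probability vector per sample
```

In this style of late fusion, the metadata branch can shift the class posterior even when the image branch alone is ambiguous, which is one plausible reading of why the multimodal model outperformed the image-only baseline.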