🤖 AI Summary
To address the time-consuming, subjective, and non-personalized nature of multiparametric breast MRI interpretation, this study proposes MOME, a large mixture-of-modality-experts model. MOME is a multimodal Transformer that jointly processes T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and dynamic contrast-enhanced MRI (DCE-MRI), combining modality-specific expert routing with contrastive representation alignment so that inference remains robust when modalities are missing. It also provides decision explanations by localizing lesions and quantifying each modality's contribution. Developed and validated on breast MRI scans from 5,205 female patients in China, MOME matches senior radiologists in identifying breast cancer and supports three further tasks: reducing unnecessary false-positive biopsies in BI-RADS category 4 patients, classifying the triple-negative subtype, and predicting pathological complete response to neoadjuvant chemotherapy (AUC = 0.89). The work establishes an accurate, interpretable, and robust noninvasive framework for personalized breast cancer management grounded in multiparametric MRI.
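The core idea of modality-aware expert routing with missing-modality robustness can be illustrated with a minimal sketch. This is not the authors' implementation (their code is linked below); it is a toy NumPy version in which each MRI sequence has its own "expert", a router scores the experts, and routing weights are renormalized over whichever modalities are actually present. All names (`mome_fuse`, `experts`, `router`) and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # hypothetical embedding size

# One toy "expert" (a random linear map) per MRI sequence.
MODALITIES = ["T2WI", "DWI", "DCE"]
experts = {m: rng.standard_normal((DIM, DIM)) for m in MODALITIES}
router = rng.standard_normal((DIM, len(MODALITIES)))  # one logit per expert


def softmax(x):
    e = np.exp(x - np.max(x[np.isfinite(x)]))
    return e / e.sum()


def mome_fuse(features):
    """Fuse the available modality features with modality-aware routing.

    `features` maps modality name -> feature vector; absent keys model
    missing sequences. Logits of missing modalities are masked to -inf,
    so their routing weight is exactly zero and the remaining weights
    renormalize -- inference degrades gracefully instead of failing.
    """
    pooled = np.mean(list(features.values()), axis=0)
    logits = pooled @ router
    mask = np.array([m in features for m in MODALITIES])
    logits = np.where(mask, logits, -np.inf)      # gate out missing modalities
    weights = softmax(logits)
    fused = sum(weights[i] * (experts[m] @ features[m])
                for i, m in enumerate(MODALITIES) if mask[i])
    return fused, weights


full = {m: rng.standard_normal(DIM) for m in MODALITIES}
fused_full, w_full = mome_fuse(full)                                # all sequences
fused_miss, w_miss = mome_fuse({k: v for k, v in full.items() if k != "DWI"})
```

The returned `weights` double as a crude per-modality contribution score, loosely mirroring how the paper attributes decisions to individual sequences.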
📝 Abstract
Breast Magnetic Resonance Imaging (MRI) demonstrates the highest sensitivity for breast cancer detection among imaging modalities and is standard practice for high-risk women. Interpreting multi-sequence breast MRI, however, is time-consuming and prone to subjective variation. We develop a large mixture-of-modality-experts model (MOME) that integrates multiparametric MRI information within a unified structure, leveraging breast MRI scans from 5,205 female patients in China for model development and validation. MOME matches the performance of four senior radiologists in identifying breast cancer and outperforms a junior radiologist. The model reduces unnecessary biopsies in Breast Imaging-Reporting and Data System (BI-RADS) category 4 patients, classifies triple-negative breast cancer, and predicts pathological complete response to neoadjuvant chemotherapy. MOME further supports inference with missing modalities and provides decision explanations by highlighting lesions and measuring modality contributions. In summary, MOME exemplifies an accurate and robust multimodal model for noninvasive, personalized management of breast cancer patients via multiparametric MRI. Code is available at https://github.com/LLYXC/MOME/tree/main.