🤖 AI Summary
Multimodal MRI often suffers from missing modalities due to acquisition constraints or clinical limitations, yet existing pretraining methods require complete modality inputs and a separate model for each modality combination, severely limiting practicality and scalability. To address this, we propose BM-MAE, a unified 3D multimodal MRI pretraining framework based on masked autoencoding that adapts plug-and-play to arbitrary modality subsets, without architectural modification or subset-specific retraining, while also enabling high-fidelity reconstruction of missing modalities. Its core innovation is a decoupled design that combines cross-modal attention with modality-specific embeddings, jointly learning shared representations and modality-specific characteristics. Evaluated on downstream tasks including brain tumor segmentation and classification, BM-MAE significantly outperforms training from scratch and matches or exceeds dedicated baselines pretrained independently for each modality combination.
📝 Abstract
Multimodal magnetic resonance imaging (MRI) constitutes the first line of investigation for clinicians in the care of brain tumors, providing crucial insights for surgery planning, treatment monitoring, and biomarker identification. Pre-training on large datasets has been shown to help models learn transferable representations and adapt with minimal labeled data. This property is especially valuable in medical imaging, where annotations are often scarce. However, applying this paradigm to multimodal medical data introduces a challenge: most existing approaches assume that all imaging modalities are available during both pre-training and fine-tuning. In practice, modalities are often missing due to acquisition issues, specialist unavailability, or specific experimental designs on small in-house datasets. Consequently, a common approach involves training a separate model for each desired modality combination, making the process both resource-intensive and impractical for clinical use. We therefore introduce BM-MAE, a masked image modeling pre-training strategy tailored for multimodal MRI data. The same pre-trained model seamlessly adapts to any combination of available modalities, extracting rich representations that capture both intra- and inter-modal information. This allows fine-tuning on any subset of modalities without requiring architectural changes, while still benefiting from a model pre-trained on the full set of modalities. Extensive experiments show that the proposed pre-training strategy outperforms or remains competitive with baselines that require separate pre-training for each modality subset, while substantially surpassing training from scratch on several downstream tasks. Additionally, it can quickly and efficiently reconstruct missing modalities, highlighting its practical value. Code and trained models are available at: https://github.com/Lucas-rbnt/bmmae
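To make the masked-autoencoding idea above concrete, here is a minimal sketch in plain Python of the token-preparation step: each available modality is patchified, tagged with a learned modality-specific embedding, and a fraction of all tokens is masked; only the visible tokens would be fed to a shared encoder, while a decoder reconstructs the masked ones. The function name, the list-of-patches representation, and the additive embeddings are illustrative assumptions, not the authors' implementation (see the linked repository for the real code).

```python
import random

def embed_and_mask(volumes, modality_embeddings, mask_ratio=0.75, seed=0):
    """Tokenize any subset of modalities and randomly mask tokens.

    volumes:             dict mapping modality name -> list of patch
                         vectors (plain lists of floats), covering only
                         the modalities that happen to be available.
    modality_embeddings: dict mapping modality name -> embedding vector
                         (stands in for a learned parameter).
    Returns the visible tokens (encoder input in an MAE-style setup)
    and the indices of the masked tokens (reconstruction targets).
    NOTE: hypothetical helper, not BM-MAE's actual code.
    """
    rng = random.Random(seed)
    tokens = []
    for name, patches in volumes.items():       # works for any modality subset
        emb = modality_embeddings[name]         # modality-specific embedding
        for patch in patches:
            # Tag each patch with its modality by element-wise addition.
            tokens.append([p + e for p, e in zip(patch, emb)])
    n_mask = int(len(tokens) * mask_ratio)
    masked_idx = set(rng.sample(range(len(tokens)), n_mask))
    visible = [t for i, t in enumerate(tokens) if i not in masked_idx]
    return visible, sorted(masked_idx)

# Example: only T1 and FLAIR are available (2 of the usual 4 MRI modalities).
volumes = {"t1": [[1.0, 0.0]] * 4, "flair": [[0.0, 1.0]] * 4}
embeddings = {"t1": [0.1, 0.0], "flair": [0.0, 0.1]}
visible, masked = embed_and_mask(volumes, embeddings, mask_ratio=0.75)
```

Because the modality tag is attached per token rather than baked into the architecture, the same encoder handles any subset of modalities, which is the property that lets one pre-trained model replace a separate model per modality combination.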