🤖 AI Summary
Missing modalities in multi-modal MRI—due to poor image quality, inconsistent acquisition protocols, patient contraindications, or cost constraints—severely degrade brain tumor segmentation performance.
Method: We propose a robust single-modal parallel segmentation framework that achieves high-accuracy segmentation using only one input MRI modality (e.g., T1, T2, FLAIR, or T1c). It incorporates dynamic parameter adaptation and modality-adaptive feature compensation to mitigate modality-specific information loss. Crucially, we introduce a novel knowledge transfer strategy jointly regularized by Hölder divergence and mutual information, preserving modality specificity while enhancing cross-modal semantic consistency.
Results: Evaluated on BraTS 2018 and 2020 benchmarks under diverse modality-missing scenarios, our method consistently outperforms state-of-the-art approaches, achieving Dice score improvements of 2.1–4.7 percentage points. It demonstrates superior robustness and generalization across heterogeneous clinical settings.
📝 Abstract
Multimodal MRI provides critical complementary information for accurate brain tumor segmentation. However, conventional methods struggle when certain modalities are missing due to issues such as image quality, protocol inconsistencies, patient allergies, or financial constraints. To address this, we propose a robust single-modality parallel processing framework that achieves high segmentation accuracy even with incomplete modalities. Leveraging Holder divergence and mutual information, our model maintains modality-specific features while dynamically adjusting network parameters based on the available inputs. By using these divergence- and information-based loss functions, the framework effectively quantifies discrepancies between predictions and ground-truth labels, resulting in consistently accurate segmentation. Extensive evaluations on the BraTS 2018 and BraTS 2020 datasets demonstrate superior performance over existing methods in handling missing modalities.