🤖 AI Summary
This work addresses the challenges of multimodal MRI brain tumor segmentation in clinical settings, where missing imaging modalities and poorly calibrated prediction confidence often hinder performance. To enhance model robustness and uncertainty quantification, the authors propose a unified Transformer-based framework that integrates a deterministic backbone with a memory-efficient Bayesian fine-tuning strategy. Key innovations include a zero-initialized multimodal context fusion module, a residual-gated deep supervision mechanism, and voxel-level uncertainty map generation. Experiments on the BraTS 2021 dataset demonstrate that the proposed method significantly improves segmentation stability and boundary accuracy under missing-modality scenarios, effectively reduces the Hausdorff distance, and yields reliable estimates of predictive uncertainty.
📝 Abstract
Accurate brain tumor segmentation from multi-modal magnetic resonance imaging (MRI) is a prerequisite for precise radiotherapy planning and surgical navigation. While recent Transformer-based models such as Swin UNETR have achieved impressive benchmark performance, their clinical utility is often compromised by two critical issues: sensitivity to missing modalities (common in clinical practice) and a lack of confidence calibration. Merely chasing higher Dice scores on idealized data fails to meet the safety requirements of real-world medical deployment. In this work, we propose BMDS-Net, a unified framework that prioritizes clinical robustness and trustworthiness over simple metric maximization. Our contributions are three-fold. First, we construct a robust deterministic backbone by integrating a Zero-Init Multimodal Contextual Fusion (MMCF) module and a Residual-Gated Deep Decoder Supervision (DDS) mechanism, enabling stable feature learning and precise boundary delineation with a significantly reduced Hausdorff Distance, even under modality corruption. Second, and most importantly, we introduce a memory-efficient Bayesian fine-tuning strategy that transforms the network into a probabilistic predictor, providing voxel-wise uncertainty maps that highlight potential errors for clinicians. Third, comprehensive experiments on the BraTS 2021 dataset demonstrate that BMDS-Net not only maintains competitive accuracy but also exhibits superior stability in missing-modality scenarios where baseline models fail. The source code is publicly available at https://github.com/RyanZhou168/BMDS-Net.
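The abstract does not specify how the voxel-wise uncertainty maps are computed, but a common recipe for a Bayesian (or stochastically fine-tuned) segmentation network is to draw several stochastic forward passes and take the per-voxel predictive entropy of the averaged softmax output. The sketch below is a minimal, hypothetical illustration of that idea using NumPy; the shapes, the number of samples `T`, and the `predictive_entropy` helper are assumptions, not the paper's actual implementation.

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Voxel-wise predictive uncertainty from T stochastic forward passes.

    prob_samples: array of shape (T, C, D, H, W) holding softmax outputs
    from T sampled networks (e.g. Bayesian weight draws or MC dropout).
    Returns the mean prediction (C, D, H, W) and an entropy map (D, H, W).
    """
    mean_p = prob_samples.mean(axis=0)  # average softmax over the T samples
    eps = 1e-8                          # guard against log(0)
    entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=0)
    return mean_p, entropy

# Toy demo: 4 segmentation classes, a tiny 2x2x2 volume, T = 8 sampled passes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 4, 2, 2, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
mean_p, uncertainty = predictive_entropy(probs)
# High-entropy voxels would be flagged to the clinician as low-confidence.
```

Entropy is zero where all samples agree on one class and approaches log(C) where the averaged prediction is uniform, so thresholding the map is a straightforward way to surface voxels whose prediction should be reviewed.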