🤖 AI Summary
Addressing the dual challenges of pervasive missing modalities and scarce annotated data in multi-modal medical imaging, this paper proposes MM-DINOv2: an extension of DINOv2 that introduces multi-modal image patch embeddings and a full-modality masking mechanism to enable cross-modal representation learning, integrated with a semi-supervised training paradigm that leverages large-scale unlabeled data. The framework substantially improves robustness and generalization under clinically realistic incomplete inputs. On glioma subtype classification, it achieves a Matthews Correlation Coefficient (MCC) of 0.60 on an external test set, surpassing the state-of-the-art supervised approach by +11.1%. According to the authors, this is the first work to systematically adapt self-supervised vision foundation models to multi-modal medical imaging with explicit missing-modality modeling and semi-supervised learning, establishing a scalable, robust analysis paradigm for low-resource clinical settings.
📝 Abstract
Vision foundation models like DINOv2 demonstrate remarkable potential in medical imaging despite their origin in natural image domains. However, their design is inherently tailored to uni-modal image analysis, limiting their effectiveness for multi-modal imaging tasks that are common in many medical fields, such as neurology and oncology. While supervised models perform well in this setting, they fail to leverage unlabeled datasets and struggle with missing modalities, a frequent challenge in clinical practice. To bridge these gaps, we introduce MM-DINOv2, a novel and efficient framework that adapts the pre-trained vision foundation model DINOv2 for multi-modal medical imaging. Our approach incorporates multi-modal patch embeddings, enabling vision foundation models to effectively process multi-modal imaging data. To address missing modalities, we employ full-modality masking, which encourages the model to learn robust cross-modality relationships. Furthermore, we leverage semi-supervised learning to harness large unlabeled datasets, enhancing both the accuracy and reliability of medical predictions. Applied to glioma subtype classification from multi-sequence brain MRI, our method achieves a Matthews Correlation Coefficient (MCC) of 0.60 on an external test set, surpassing state-of-the-art supervised approaches by +11.1%. Our work establishes a scalable and robust solution for multi-modal medical imaging tasks, leveraging powerful vision foundation models pre-trained on natural images while addressing real-world clinical challenges such as missing data and limited annotations.
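To make the two architectural ideas concrete, below is a minimal PyTorch sketch of (a) a multi-modal patch embedding that projects each MRI sequence into a shared token space with a learned modality embedding, and (b) full-modality masking that drops entire modalities during training so the model must rely on cross-modal relationships. This is an illustrative assumption of how such components could look, not the authors' released code; class and parameter names such as `MultiModalPatchEmbed`, `modality_drop_prob`, and the default dimensions are hypothetical.

```python
# Minimal sketch (not the authors' implementation): multi-modal patch embedding
# with full-modality masking for a ViT backbone such as DINOv2.
# Assumed input shape: x is (batch, num_modalities, H, W), one channel per MRI sequence.
import torch
import torch.nn as nn


class MultiModalPatchEmbed(nn.Module):
    """One patch projection per modality plus a learned modality embedding and mask token."""

    def __init__(self, num_modalities=4, patch_size=14, embed_dim=384, img_size=224):
        super().__init__()
        self.projs = nn.ModuleList(
            nn.Conv2d(1, embed_dim, kernel_size=patch_size, stride=patch_size)
            for _ in range(num_modalities)
        )
        self.modality_embed = nn.Parameter(torch.zeros(num_modalities, 1, embed_dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.num_patches = (img_size // patch_size) ** 2

    def forward(self, x, modality_mask=None):
        # x: (B, M, H, W); modality_mask: (B, M) bool, True means the modality is missing/masked.
        tokens = []
        for m, proj in enumerate(self.projs):
            t = proj(x[:, m : m + 1])          # (B, D, H/p, W/p)
            t = t.flatten(2).transpose(1, 2)   # (B, N, D)
            t = t + self.modality_embed[m]     # tag tokens with their modality
            if modality_mask is not None:
                missing = modality_mask[:, m].view(-1, 1, 1)
                # Replace every token of a masked modality with the shared mask token.
                t = torch.where(missing, self.mask_token.expand_as(t), t)
            tokens.append(t)
        return torch.cat(tokens, dim=1)        # (B, M*N, D) fed to the ViT backbone


def full_modality_mask(batch_size, num_modalities, modality_drop_prob=0.15, device="cpu"):
    """Randomly drop whole modalities during training, keeping at least one per sample."""
    mask = torch.rand(batch_size, num_modalities, device=device) < modality_drop_prob
    all_dropped = mask.all(dim=1)
    mask[all_dropped, 0] = False
    return mask
```

In training, a batch would first pass through `full_modality_mask` and then the embedding module; at inference, genuinely absent sequences would be flagged in `modality_mask` so the backbone receives mask tokens in their place, which is what makes the model tolerant to the missing-modality scenario described above.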