MM-DINOv2: Adapting Foundation Models for Multi-Modal Medical Image Analysis

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the dual challenges of pervasive missing modalities and scarce annotated data in multi-modal medical imaging, this paper proposes MM-DINOv2: an extension of DINOv2 that introduces multi-modal image patch embeddings and a full-modality masking mechanism to enable cross-modal representation learning, integrated with a semi-supervised training paradigm to leverage large-scale unlabeled data. The framework substantially improves robustness and generalization under clinically realistic incomplete inputs. On glioma subtype classification, it achieves a Matthews Correlation Coefficient (MCC) of 0.6 on an external test set, surpassing the state-of-the-art supervised method by +11.1%. To our knowledge, this is the first work to systematically adapt self-supervised vision foundation models to multi-modal medical imaging with missing-modality modeling and semi-supervised learning, establishing a scalable, robust analysis paradigm for low-resource clinical settings.

📝 Abstract
Vision foundation models like DINOv2 demonstrate remarkable potential in medical imaging despite their origin in natural image domains. However, their design inherently works best for uni-modal image analysis, limiting their effectiveness for multi-modal imaging tasks that are common in many medical fields, such as neurology and oncology. While supervised models perform well in this setting, they fail to leverage unlabeled datasets and struggle with missing modalities, a frequent challenge in clinical settings. To bridge these gaps, we introduce MM-DINOv2, a novel and efficient framework that adapts the pre-trained vision foundation model DINOv2 for multi-modal medical imaging. Our approach incorporates multi-modal patch embeddings, enabling vision foundation models to effectively process multi-modal imaging data. To address missing modalities, we employ full-modality masking, which encourages the model to learn robust cross-modality relationships. Furthermore, we leverage semi-supervised learning to harness large unlabeled datasets, enhancing both the accuracy and reliability of medical predictions. Applied to glioma subtype classification from multi-sequence brain MRI, our method achieves a Matthews Correlation Coefficient (MCC) of 0.6 on an external test set, surpassing state-of-the-art supervised approaches by +11.1%. Our work establishes a scalable and robust solution for multi-modal medical imaging tasks, leveraging powerful vision foundation models pre-trained on natural images while addressing real-world clinical challenges such as missing data and limited annotations.
Problem

Research questions and friction points this paper is trying to address.

Adapting DINOv2 for multi-modal medical imaging tasks
Addressing missing modalities in clinical imaging data
Leveraging unlabeled datasets through semi-supervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal patch embeddings for processing diverse imaging data
Full-modality masking to handle missing clinical modalities
Semi-supervised learning leveraging large unlabeled datasets
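The first two ideas above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, the 25% drop probability, and all variable names are assumptions made for the example. Each MRI sequence gets its own patch projection plus a learned modality embedding (so tokens from different sequences share one space), and full-modality masking zeroes out entire sequences at random to mimic missing clinical modalities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): 4 MRI sequences,
# 16x16 patches flattened to 256 values, embedding dimension 32.
MODALITIES = ["T1", "T1c", "T2", "FLAIR"]
PATCH_DIM, EMBED_DIM = 256, 32

# One linear projection per modality plus a learned modality embedding,
# so patches from different sequences land in a shared token space.
proj = {m: rng.normal(scale=0.02, size=(PATCH_DIM, EMBED_DIM)) for m in MODALITIES}
mod_embed = {m: rng.normal(scale=0.02, size=EMBED_DIM) for m in MODALITIES}

def embed_patches(patches_by_modality):
    """Project each modality's patches and tag them with its modality embedding."""
    tokens = []
    for m, patches in patches_by_modality.items():
        tokens.append(patches @ proj[m] + mod_embed[m])
    return np.concatenate(tokens, axis=0)  # (num_modalities * num_patches, EMBED_DIM)

def full_modality_mask(patches_by_modality, drop_prob=0.25):
    """Zero out entire modalities at random, mimicking missing MRI sequences."""
    masked = {}
    for m, patches in patches_by_modality.items():
        keep = rng.random() >= drop_prob
        masked[m] = patches if keep else np.zeros_like(patches)
    return masked

# Usage: 196 patches (a 14x14 grid) per sequence.
patches = {m: rng.normal(size=(196, PATCH_DIM)) for m in MODALITIES}
tokens = embed_patches(full_modality_mask(patches))
print(tokens.shape)  # (784, 32)
```

Masking a whole sequence (rather than scattered patches) forces the encoder to reconstruct modality-level information from the remaining sequences, which is what makes the model robust when a scan is genuinely absent at inference time.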
Daniel Scholz
Chair for AI for Image-Guided Diagnosis and Therapy, Technical University of Munich (TUM) and TUM University Hospital, Munich, Germany
Ayhan Can Erdur
Technical University of Munich
Deep Learning · Computer Vision · Medical Imaging · 3D Segmentation · Survival Analysis
Viktoria Ehm
Technical University of Munich
Computer Vision · 3D Shape Analysis · Shape Matching
Anke Meyer-Baese
Professor of Scientific Computing, Florida State University
Medical Imaging · Electronics and Electrical · Computer Science · Neuroscience
Jan C. Peeken
Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
Daniel Rueckert
Technical University of Munich and Imperial College London
Machine Learning · Medical Image Computing · Biomedical Image Analysis · Computer Vision
Benedikt Wiestler
Munich Center for Machine Learning (MCML), Munich, Germany