MRI-CORE: A Foundation Model for Magnetic Resonance Imaging

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training MRI models typically requires large-scale annotated datasets, yet medical image annotation is costly and constrained by privacy concerns. Method: We propose MRI-CORE, a vision foundation model for MRI, pre-trained on more than 6 million unlabeled slices from over 110,000 3D MRI volumes spanning 18 anatomical regions, multiple centers, and diverse acquisition sequences. Building on a masked autoencoder architecture, it introduces 3D spatially aware positional encoding and cross-volume contrastive learning to enable unified representation learning across centers, anatomical regions, and sequences. Contribution/Results: MRI-CORE supports zero-shot segmentation and classification of image metadata (anatomical region, sequence type, acquisition site), breaking away from task-specific paradigms. On five segmentation benchmarks, fine-tuning with only 10 annotated slices per task improves the mean 3D Dice coefficient by 6.97%, demonstrating substantial gains in few-shot generalization.
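The two pre-training ingredients named above can be sketched in a few lines: a sinusoidal positional encoding over (x, y, slice-depth) patch coordinates, and MAE-style random patch masking. This is a minimal NumPy illustration of the general techniques, not the paper's implementation; all function names, the 75% mask ratio, and the 14×14 patch grid are assumptions.

```python
import numpy as np

def pos_encoding_3d(coords, dim):
    """Sinusoidal encoding of (x, y, slice-depth) patch coordinates.
    dim must be divisible by 6 (one sin + one cos band per axis).
    Illustrative only; not the paper's encoding."""
    assert dim % 6 == 0
    d = dim // 6
    freqs = 1.0 / (10000 ** (np.arange(d) / d))        # (d,)
    parts = []
    for axis in range(3):                              # x, y, z
        ang = np.outer(coords[:, axis], freqs)         # (N, d)
        parts += [np.sin(ang), np.cos(ang)]
    return np.concatenate(parts, axis=1)               # (N, dim)

def random_mask(n_patches, mask_ratio=0.75, rng=None):
    """MAE-style masking: split patch indices into kept and masked sets."""
    rng = rng or np.random.default_rng(0)
    n_keep = int(n_patches * (1 - mask_ratio))
    perm = rng.permutation(n_patches)
    return np.sort(perm[:n_keep]), np.sort(perm[n_keep:])

# A 14x14 patch grid on one slice at depth z=5 (hypothetical sizes)
coords = np.array([(x, y, 5) for x in range(14) for y in range(14)])
pe = pos_encoding_3d(coords, dim=96)
keep, masked = random_mask(len(coords))
print(pe.shape, len(keep), len(masked))  # (196, 96) 49 147
```

Feeding the slice's depth index as a third coordinate is one simple way to make a 2D slice encoder aware of its position inside the 3D volume.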

📝 Abstract
The widespread use of Magnetic Resonance Imaging (MRI) and the rise of deep learning have enabled the development of powerful predictive models for a wide range of diagnostic tasks in MRI, such as image classification or object segmentation. However, training models for specific new tasks often requires large amounts of labeled data, which is difficult to obtain due to high annotation costs and data privacy concerns. To circumvent this issue, we introduce MRI-CORE (MRI COmprehensive Representation Encoder), a vision foundation model pre-trained using more than 6 million slices from over 110,000 MRI volumes across 18 main body locations. Experiments on five diverse object segmentation tasks in MRI demonstrate that MRI-CORE can significantly improve segmentation performance in realistic scenarios with limited labeled data availability, achieving an average gain of 6.97% 3D Dice Coefficient using only 10 annotated slices per task. We further demonstrate new model capabilities in MRI such as classification of image properties including body location, sequence type and institution, and zero-shot segmentation. These results highlight the value of MRI-CORE as a generalist vision foundation model for MRI, potentially lowering the data annotation resource barriers for many applications.
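The reported gains are stated in terms of the 3D Dice coefficient, the standard overlap metric for volumetric segmentation. As a reference, here is a minimal NumPy implementation of that metric (a generic sketch, not taken from the paper's code):

```python
import numpy as np

def dice_3d(pred, gt, eps=1e-8):
    """3D Dice coefficient between two binary volumes:
    2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Two 8x8x8 cubes overlapping in half their extent (toy volumes)
a = np.zeros((16, 16, 16)); a[0:8, 0:8, 0:8] = 1
b = np.zeros((16, 16, 16)); b[4:12, 0:8, 0:8] = 1
print(round(dice_3d(a, b), 3))  # 0.5
```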
Problem

Research questions and friction points this paper is trying to address.

Reducing labeled data needs for MRI deep learning tasks
Improving MRI segmentation with limited annotated data
Enabling multi-task MRI analysis via foundation model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-trained foundation model for MRI
Uses 6M slices from 110K volumes
Improves segmentation with limited data