Decipher-MR: A Vision-Language Foundation Model for 3D MRI Representations

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address key bottlenecks hindering the clinical deployment of foundation models on MRI (high data heterogeneity, severe annotation scarcity, and poor generalizability), this work introduces the first vision-language foundation model designed specifically for 3D MRI. Methodologically, self-supervised visual learning (masked autoencoding and contrastive learning) is combined with radiology report–guided textual supervision to achieve robust cross-modal alignment; the model is jointly trained on large-scale MRI image–report pairs, yielding unified representations across anatomical regions, acquisition sequences, and pathologies. For downstream use, a modular design couples the frozen pretrained 3D CNN-Transformer encoder with lightweight task-specific decoders, keeping adaptation inexpensive. Experiments demonstrate consistent improvements over existing medical foundation models and task-specific models in disease classification, demographic prediction, anatomical localization, and cross-modal retrieval, with both strong generalization and clinically interpretable outputs.
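The report-guided cross-modal alignment described above is typically realized with a symmetric contrastive (InfoNCE-style) objective over paired image and text embeddings. The following is a minimal, stdlib-only Python sketch of that loss on toy embedding vectors; it is an illustration of the general technique, not the paper's actual implementation, and all function names here are hypothetical.

```python
import math

def dot(u, v):
    # inner product of two equal-length vectors
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # cosine similarity; zero-vectors fall back to norm 1 to avoid division by zero
    nu = math.sqrt(dot(u, u)) or 1.0
    nv = math.sqrt(dot(v, v)) or 1.0
    return dot(u, v) / (nu * nv)

def contrastive_loss(img_embs, txt_embs, temperature=0.07):
    """Symmetric InfoNCE-style loss (toy sketch): each image embedding
    should be most similar to its paired report embedding, and vice versa."""
    n = len(img_embs)
    # temperature-scaled similarity matrix: rows = images, columns = reports
    sims = [[cosine(img_embs[i], txt_embs[j]) / temperature
             for j in range(n)] for i in range(n)]
    loss = 0.0
    for i in range(n):
        # image -> text direction: softmax cross-entropy over row i
        row = sims[i]
        loss += -row[i] + math.log(sum(math.exp(s) for s in row))
        # text -> image direction: softmax cross-entropy over column i
        col = [sims[k][i] for k in range(n)]
        loss += -col[i] + math.log(sum(math.exp(s) for s in col))
    return loss / (2 * n)
```

Correctly paired batches should score a lower loss than mismatched ones, which is what drives the joint embedding space toward cross-modal alignment.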

📝 Abstract
Magnetic Resonance Imaging (MRI) is a critical medical imaging modality in clinical diagnosis and research, yet its complexity and heterogeneity pose challenges for automated analysis, particularly in scalable and generalizable machine learning applications. While foundation models have revolutionized natural language and vision tasks, their application to MRI remains limited due to data scarcity and narrow anatomical focus. In this work, we present Decipher-MR, a 3D MRI-specific vision-language foundation model trained on a large-scale dataset comprising 200,000 MRI series from over 22,000 studies spanning diverse anatomical regions, sequences, and pathologies. Decipher-MR integrates self-supervised vision learning with report-guided text supervision to build robust, generalizable representations, enabling effective adaptation across broad applications. To support robust and diverse clinical tasks with minimal computational overhead, Decipher-MR adopts a modular design in which lightweight, task-specific decoders are tuned on top of a frozen pretrained encoder. Following this setting, we evaluate Decipher-MR across diverse benchmarks including disease classification, demographic prediction, anatomical localization, and cross-modal retrieval, demonstrating consistent performance gains over existing foundation models and task-specific approaches. Our results establish Decipher-MR as a scalable and versatile foundation for MRI-based AI, facilitating efficient development across clinical and research domains.
Problem

Research questions and friction points this paper is trying to address.

Addressing MRI complexity and heterogeneity for automated analysis
Overcoming data scarcity and narrow anatomical focus in MRI AI
Enabling scalable machine learning applications across diverse clinical tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale 3D MRI vision-language foundation model
Self-supervised vision learning with report-guided text supervision
Modular design with frozen encoder and lightweight task-specific decoders
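The frozen-encoder pattern from the last bullet can be sketched in a few lines: the pretrained encoder's weights are held fixed, and only a small task head receives gradient updates. The toy stdlib-Python example below, with hypothetical class names and a trivial linear "encoder" standing in for the 3D model, shows the parameter partition on a two-sample binary task; it is a sketch of the adaptation recipe, not the paper's code.

```python
import math

class FrozenEncoder:
    """Toy stand-in for the pretrained 3D MRI encoder: a fixed linear
    projection whose weights are never updated during adaptation."""
    def __init__(self, weights):
        self.weights = weights  # frozen: no gradient step ever touches these

    def encode(self, x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in self.weights]

class LinearDecoder:
    """Lightweight task-specific head: the only trainable parameters."""
    def __init__(self, dim):
        self.w = [0.0] * dim
        self.b = 0.0

    def predict(self, feats):
        z = sum(w * f for w, f in zip(self.w, feats)) + self.b
        return 1.0 / (1.0 + math.exp(-z))  # sigmoid probability

    def sgd_step(self, feats, label, lr=0.5):
        err = self.predict(feats) - label  # logistic-loss gradient w.r.t. z
        self.w = [w - lr * err * f for w, f in zip(self.w, feats)]
        self.b -= lr * err

# Toy adaptation loop: only the decoder moves; the encoder stays frozen.
encoder = FrozenEncoder([[1.0, 0.0], [0.0, 1.0]])
decoder = LinearDecoder(dim=2)
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]
for _ in range(50):
    for x, y in data:
        decoder.sgd_step(encoder.encode(x), y)
```

Because only the decoder's few parameters are tuned, each new clinical task costs a fraction of full fine-tuning, which is what makes the modular design cheap to adapt.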