Data Scaling Laws for Radiology Foundation Models

📅 2025-09-16
🤖 AI Summary
Medical foundation models for imaging are constrained by limited training data, and their scaling behavior remains poorly understood. This study systematically investigates the impact of dataset scale and pretraining paradigms on model performance across classification, segmentation, and radiology report generation—using a large-scale chest X-ray corpus of 3.5 million images. We conduct the first comparative analysis of CLIP and DINOv2 architectures in medical imaging to characterize their scaling properties; validate institution-specific continual pretraining; and propose a multi-source supervision strategy integrating structured labels and free-text reports. Leveraging MedImageInsight (MI2) and RAD-DINO encoders, we perform continual pretraining on single-center data under a unified evaluation protocol. Results show that as few as 30K domain-specific images suffice to surpass state-of-the-art open-source medical foundation models. MI2 achieves superior lesion classification accuracy, while RAD-DINO excels in catheter segmentation.

📝 Abstract
Foundation vision encoders such as CLIP and DINOv2, trained on web-scale data, exhibit strong transfer performance across tasks and datasets. However, medical imaging foundation models remain constrained by smaller datasets, limiting our understanding of how data scale and pretraining paradigms affect performance in this setting. In this work, we systematically study continual pretraining of two vision encoders, MedImageInsight (MI2) and RAD-DINO, representing the two major encoder paradigms, CLIP and DINOv2, on up to 3.5M chest X-rays from a single institution, holding compute and evaluation protocols constant. We evaluate on classification (radiology findings, lines and tubes), segmentation (lines and tubes), and radiology report generation. While prior work has primarily focused on tasks related to radiology findings, we include lines-and-tubes tasks to counterbalance this bias and to evaluate a model's ability to extract features that preserve continuity along elongated structures. Our experiments show that MI2 scales more effectively for finding-related tasks, while RAD-DINO is stronger on tube-related tasks. Surprisingly, continually pretraining MI2 with both reports and structured labels using UniCL improves performance, underscoring the value of structured supervision at scale. We further show that for some tasks, as few as 30K in-domain samples are sufficient to surpass open-weights foundation models. These results highlight the utility of center-specific continual pretraining, enabling medical institutions to derive significant performance gains from their in-domain data.
Problem

Research questions and friction points this paper is trying to address.

Investigating data scaling effects on medical imaging foundation models
Comparing CLIP and DINOv2 paradigms for radiology-specific tasks
Evaluating continual pretraining with institutional chest x-ray data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual pretraining of vision encoders MedImageInsight and RAD-DINO
Utilizing up to 3.5M chest x-rays from single institution
Structured supervision with reports and labels improves performance
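The multi-source supervision above follows the UniCL formulation: free-text reports and structured labels are folded into a single contrastive objective by treating any image–text pair that shares a label as a positive, rather than only the diagonal pairs as in CLIP. A minimal NumPy sketch of such a loss (the function name, shapes, and all simplifications are ours, not the paper's implementation):

```python
import numpy as np

def unicl_loss(img_emb, txt_emb, labels, temperature=0.07):
    """UniCL-style bidirectional contrastive loss with multi-positive targets.

    Any (image i, text j) pair with labels[i] == labels[j] counts as a
    positive, so structured labels and report texts share one target space.
    """
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature              # (B, B) cosine similarities
    pos = (labels[:, None] == labels[None, :]).astype(float)
    targets = pos / pos.sum(axis=1, keepdims=True)  # row-normalized positives

    def soft_xent(l):
        # Cross-entropy against soft targets, with a stable log-softmax.
        m = l.max(axis=1, keepdims=True)
        logp = l - m - np.log(np.exp(l - m).sum(axis=1, keepdims=True))
        return -(targets * logp).sum(axis=1).mean()

    # Symmetric image-to-text and text-to-image directions, as in CLIP/UniCL.
    return 0.5 * (soft_xent(logits) + soft_xent(logits.T))
```

With label-agnostic inputs (all labels distinct) this reduces to the standard CLIP InfoNCE loss; grouping samples under shared structured labels is what lets the encoder exploit both reports and labels at once.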
Maximilian Ilse
Senior Researcher @ Microsoft Research
medical imaging, deep learning, machine learning
Harshita Sharma
Senior Researcher at Microsoft
Computer vision, Medical image analysis, Machine learning, Biomedical imaging, Multimodal methods
Anton Schwaighofer
Microsoft Health Futures UK
Sam Bond-Taylor
Senior Researcher at Microsoft Research
Deep Learning, Generative Models, Medical Imaging
Fernando Pérez-García
Microsoft Research - Biomedical Imaging
medical image computing, machine learning
Olesya Melnichenko
Microsoft Health Futures US
Anne-Marie G. Sykes
Mayo Clinic
Kelly K. Horst
Radiology AI Lab, Mayo Clinic
Ashish Khandelwal
Mayo Clinic
Maxwell Reynolds
Mayo Clinic
Maria T. Wetscherek
Microsoft Health Futures UK, Cambridge University Hospitals
Noel C. F. Codella
Microsoft Health & Life Sciences
Javier Alvarez-Valle
Microsoft Health Futures UK
Panagiotis Korfiatis
Mayo Clinic
Valentina Salvatelli
Microsoft Health Futures UK