🤖 AI Summary
Medical foundation models for imaging are constrained by limited training data, and their scaling behavior remains poorly understood. This study systematically investigates the impact of dataset scale and pretraining paradigm on model performance across classification, segmentation, and radiology report generation, using a large-scale chest X-ray corpus of 3.5 million images. It presents the first comparative analysis of the CLIP and DINOv2 paradigms in medical imaging to characterize their scaling properties, validates institution-specific continual pretraining, and proposes a multi-source supervision strategy integrating structured labels and free-text reports. Leveraging the MedImageInsight (MI2) and RAD-DINO encoders, the authors perform continual pretraining on single-center data under a unified evaluation protocol. Results show that for some tasks, as few as 30K domain-specific images suffice to surpass state-of-the-art open-source medical foundation models: MI2 achieves superior accuracy on finding classification, while RAD-DINO excels at line and tube segmentation.
📝 Abstract
Foundation vision encoders such as CLIP and DINOv2, trained on web-scale data, exhibit strong transfer performance across tasks and datasets. However, medical imaging foundation models remain constrained by smaller datasets, limiting our understanding of how data scale and pretraining paradigm affect performance in this setting. In this work, we systematically study continual pretraining of two vision encoders, MedImageInsight (MI2) and RAD-DINO, representing the two major encoder paradigms (CLIP and DINOv2, respectively), on up to 3.5M chest X-rays from a single institution, holding compute and evaluation protocols constant. We evaluate on classification (radiology findings, lines and tubes), segmentation (lines and tubes), and radiology report generation. While prior work has focused primarily on tasks related to radiology findings, we include lines-and-tubes tasks to counterbalance this bias and to evaluate a model's ability to extract features that preserve continuity along elongated structures. Our experiments show that MI2 scales more effectively for finding-related tasks, while RAD-DINO is stronger on tube-related tasks. Surprisingly, continually pretraining MI2 on both reports and structured labels with the UniCL objective improves performance, underscoring the value of structured supervision at scale. We further show that for some tasks, as few as 30K in-domain samples are sufficient to surpass open-weights foundation models. These results highlight the utility of center-specific continual pretraining, enabling medical institutions to derive significant performance gains from their in-domain data.
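The UniCL objective mentioned above places structured labels and free-text reports under one contrastive formulation: unlike CLIP, where each image has exactly one positive text in the batch, any image–text pair that shares a label counts as a positive. A minimal NumPy sketch of this multi-positive bidirectional loss follows (function names, shapes, and the temperature value are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def log_softmax(x):
    """Row-wise log-softmax, numerically stabilized."""
    m = x.max(axis=1, keepdims=True)
    return x - m - np.log(np.exp(x - m).sum(axis=1, keepdims=True))

def unicl_loss(img_emb, txt_emb, labels, temperature=0.07):
    """UniCL-style bidirectional contrastive loss (illustrative sketch).

    Unlike plain CLIP, the target matrix marks every image-text pair
    that shares a label as positive, so structured labels and free-text
    reports can supervise the same batch jointly.
    """
    # L2-normalize embeddings so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix

    labels = np.asarray(labels)
    # Multi-positive targets: 1 wherever two batch items share a label.
    targets = (labels[:, None] == labels[None, :]).astype(float)
    targets /= targets.sum(axis=1, keepdims=True)   # row-normalize

    def soft_ce(lg, tg):
        # Cross-entropy against soft (multi-positive) targets.
        return -(tg * log_softmax(lg)).sum(axis=1).mean()

    # Symmetrize over image-to-text and text-to-image directions.
    return 0.5 * (soft_ce(logits, targets) + soft_ce(logits.T, targets.T))

rng = np.random.default_rng(0)
loss = unicl_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)),
                  labels=[0, 0, 1, 1])
```

With pure CLIP-style supervision the target matrix would be the identity; replacing it with the label-sharing matrix is what lets report-paired and label-only images coexist in one training batch.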