RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision

📅 2024-01-19
🏛️ arXiv.org
📈 Citations: 9
Influential: 0
🤖 AI Summary
Medical imaging datasets often lack high-quality paired textual annotations (e.g., radiology reports), and such text is constrained by subjectivity and privacy concerns, limiting the generalizability of language-supervised models. To address this, the paper proposes RAD-DINO, a general-purpose biomedical image encoder trained entirely via self-supervised visual pretraining on unpaired images. Built on the DINO framework, it combines multi-view augmentation with teacher-student distillation and requires no textual supervision. The experiments show that: (1) purely visual self-supervision can match or surpass state-of-the-art language-supervised models; (2) language supervision can degrade representation fidelity for clinically relevant variables that are rarely mentioned in reports (e.g., age, sex); and (3) downstream performance scales with both the quantity and the diversity of training data. RAD-DINO matches or exceeds prior models on classification, segmentation, and image-text alignment (report generation) tasks, while correlating better with clinically relevant attributes. Model weights are publicly released.
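The DINO-style recipe the summary refers to — an exponential-moving-average (EMA) teacher producing centered, sharpened targets that a student network is trained to match across augmented views — can be sketched roughly as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; the function names, temperatures, and momentum values are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temp):
    """Temperature-scaled softmax, numerically stabilized."""
    z = logits / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dino_loss(student_logits, teacher_logits, center, t_s=0.1, t_t=0.04):
    """Cross-entropy between sharpened, centered teacher targets and
    student predictions (one student view vs. one teacher view)."""
    p_t = softmax(teacher_logits - center, t_t)      # centered + sharpened targets
    log_p_s = np.log(softmax(student_logits, t_s) + 1e-12)
    return -(p_t * log_p_s).sum(axis=-1).mean()

def ema_update(teacher_params, student_params, momentum=0.996):
    """Teacher weights track the student via an exponential moving average."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

def update_center(center, teacher_logits, momentum=0.9):
    """The centering term is itself an EMA of teacher outputs (collapse avoidance)."""
    return momentum * center + (1.0 - momentum) * teacher_logits.mean(axis=0)
```

In a training loop, the loss would be summed over pairs of augmented views, gradients would update only the student, and `ema_update` / `update_center` would run once per step with no gradient flow through the teacher.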

📝 Abstract
Language-supervised pre-training has proven to be a valuable method for extracting semantically meaningful features from images, serving as a foundational element in multimodal systems within the computer vision and medical imaging domains. However, the computed features are limited by the information contained in the text, which is particularly problematic in medical imaging, where the findings described by radiologists focus on specific observations. This challenge is compounded by the scarcity of paired imaging-text data due to concerns over leakage of personal health information. In this work, we fundamentally challenge the prevailing reliance on language supervision for learning general-purpose biomedical imaging encoders. We introduce RAD-DINO, a biomedical image encoder pre-trained solely on unimodal biomedical imaging data that obtains similar or greater performance than state-of-the-art biomedical language-supervised models on a diverse range of benchmarks. Specifically, the quality of learned representations is evaluated on standard imaging tasks (classification and semantic segmentation), and a vision-language alignment task (text report generation from images). To further demonstrate the drawback of language supervision, we show that features from RAD-DINO correlate better than those of language-supervised models with other medical records (e.g., sex or age), attributes that are generally not mentioned in radiology reports. Finally, we conduct a series of ablations determining the factors in RAD-DINO's performance; notably, we observe that RAD-DINO's downstream performance scales well with the quantity and diversity of training data, demonstrating that image-only supervision is a scalable approach for training a foundational biomedical image encoder. Model weights of RAD-DINO trained on publicly available datasets are available at https://huggingface.co/microsoft/rad-dino.
Problem

Research questions and friction points this paper is trying to address.

Medical Image Understanding
Text-independent Image Encoding
Data Scarcity in Medical Imaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

RAD-DINO
Self-supervised Learning
Medical Imaging
👥 Authors

Fernando Pérez-García
Microsoft Research - Biomedical Imaging
medical image computing, machine learning

Harshita Sharma
Senior Researcher at Microsoft
Computer vision, Medical image analysis, Machine learning, Biomedical imaging, Multimodal methods

Sam Bond-Taylor
Senior Researcher at Microsoft Research
Deep Learning, Generative Models, Medical Imaging

Kenza Bouzid
Microsoft Research
Machine Learning, Computer Vision

Valentina Salvatelli
Health Futures, Microsoft Research

Maximilian Ilse
Senior Researcher @ Microsoft Research
medical imaging, deep learning, machine learning

Shruthi Bannur
Microsoft Research
Machine Learning, Deep Learning, Computer Vision, Natural Language Processing

Daniel C. Castro
Health Futures, Microsoft Research

Anton Schwaighofer
Health Futures, Microsoft Research

Matthew P. Lungren
Microsoft Health and Life Sciences

Maria Wetscherek
Health Futures, Microsoft Research; Department of Radiology, University of Cambridge and Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK

Noel Codella
Principal Researcher @ Microsoft
Artificial Intelligence, Machine Learning, Computer Vision

Stephanie L. Hyland
Health Futures, Microsoft Research

Javier Alvarez-Valle
Health Futures, Microsoft Research

Ozan Oktay
Health Futures, Microsoft Research