🤖 AI Summary
Existing retinal foundation models rely on manually annotated, curated datasets and require extensive task-specific fine-tuning, hindering deployment in resource-constrained clinical settings. This paper introduces ReVision, the first retinal foundation model trained exclusively on real-world, decade-long telemedicine data (485,980 color fundus photographs with corresponding clinical reports), eliminating the need for manual annotations and enabling zero-shot disease detection as well as cross-institutional and cross-modal generalization. The authors propose the "clinical native intelligence" paradigm, replacing curated datasets with authentic consultation data, and integrate contrastive vision-language alignment, zero-shot prompting, lightweight adapter-based fine-tuning, and cross-domain representation transfer. ReVision achieves a zero-shot AUROC of 0.946 across 12 public benchmarks and 0.952 on three independent clinical cohorts; it improves physician diagnostic accuracy by 14.8%; and it matches full fine-tuning performance while requiring orders of magnitude fewer trainable parameters and labeled examples.
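The zero-shot disease detection described above typically works by embedding an image and a set of text prompts (one per candidate diagnosis) into a shared space, then scoring each label by image-prompt similarity. The paper does not publish its prompting code, so the following is a minimal illustrative sketch with hypothetical toy embeddings and labels; real systems use high-dimensional vectors produced by the trained image and text encoders.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_scores(image_emb, prompt_embs, temperature=0.07):
    """Softmax over temperature-scaled image-prompt cosine similarities."""
    labels = list(prompt_embs)
    sims = [cosine(image_emb, prompt_embs[l]) / temperature for l in labels]
    m = max(sims)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    return {label: e / z for label, e in zip(labels, exps)}

# Toy 3-d embeddings; labels and values are illustrative, not from the paper.
prompts = {
    "diabetic retinopathy": [0.9, 0.1, 0.0],
    "glaucoma":             [0.1, 0.9, 0.0],
    "normal fundus":        [0.0, 0.1, 0.9],
}
image = [0.8, 0.2, 0.1]  # hypothetical fundus-image embedding
scores = zero_shot_scores(image, prompts)
best = max(scores, key=scores.get)  # → "diabetic retinopathy"
```

No task-specific training is needed: adding a new disease only requires adding a new text prompt, which is what makes this style of inference attractive in low-resource settings.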
📝 Abstract
Current retinal foundation models remain constrained by curated research datasets that lack authentic clinical context, and they require extensive task-specific optimization for each application, limiting their deployment efficiency in low-resource settings. Here, we show that these barriers can be overcome by building clinical native intelligence directly from real-world medical practice. Our key insight is that large-scale telemedicine programs, where expert centers provide remote consultations across distributed facilities, represent a natural reservoir for learning clinical image interpretation. We present ReVision, a retinal foundation model that learns from the natural alignment between 485,980 color fundus photographs and their corresponding diagnostic reports, accumulated through a decade-long telemedicine program spanning 162 medical institutions across China. Through extensive evaluation across 27 ophthalmic benchmarks, we demonstrate that ReVision enables efficient deployment with minimal local resources. Without any task-specific training, ReVision achieves zero-shot disease detection with an average AUROC of 0.946 across 12 public benchmarks and 0.952 on three independent clinical cohorts. When minimal adaptation is feasible, ReVision matches extensively fine-tuned alternatives while requiring orders of magnitude fewer trainable parameters and labeled examples. The learned representations also transfer effectively to new clinical sites, imaging domains, imaging modalities, and systemic health prediction tasks. In a prospective reader study with 33 ophthalmologists, ReVision's zero-shot assistance improved diagnostic accuracy by 14.8% across all experience levels. These results demonstrate that clinical native intelligence can be extracted directly from clinical archives, without any further annotation, to build medical AI systems suited to various low-resource settings.
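The "natural alignment" between photographs and reports is typically exploited with a symmetric contrastive (InfoNCE-style) objective: in a batch, each image should be most similar to its own report and dissimilar to the others, in both the image-to-text and text-to-image directions. The paper's exact loss is not given here, so this is a minimal sketch of that standard objective over a hypothetical precomputed similarity matrix.

```python
import math

def info_nce(sim, temperature=0.1):
    """Symmetric InfoNCE loss over an image-report similarity matrix.

    sim[i][j] is the similarity of image i with report j; matched
    image-report pairs sit on the diagonal. Lower loss = better alignment.
    """
    n = len(sim)

    def ce_rows(m):
        # Cross-entropy of each row's softmax against the diagonal target.
        total = 0.0
        for i in range(n):
            logits = [m[i][j] / temperature for j in range(n)]
            mx = max(logits)  # log-sum-exp with max subtraction for stability
            log_z = mx + math.log(sum(math.exp(l - mx) for l in logits))
            total += log_z - logits[i]  # -log p(matched report | image i)
        return total / n

    transposed = [list(col) for col in zip(*sim)]
    # Average the image->text and text->image directions.
    return 0.5 * (ce_rows(sim) + ce_rows(transposed))

# Toy batches (values are illustrative): a well-aligned batch where matched
# pairs dominate, and a mismatched batch where pairings are shuffled.
aligned  = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
shuffled = [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
loss_aligned, loss_shuffled = info_nce(aligned), info_nce(shuffled)
```

Training an encoder pair to minimize this loss over hundreds of thousands of consultation pairs is what lets the same embeddings serve zero-shot prompting, lightweight adapters, and cross-domain transfer downstream.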