🤖 AI Summary
This work addresses the challenge that medical foundation models often rely on task-specific fine-tuning and struggle to generalize in zero-shot clinical settings. To overcome this, we propose DermFM-Zero, the first vision-language foundation model capable of zero-shot dermatological diagnosis and multimodal retrieval without any fine-tuning. By integrating masked latent modelling, contrastive learning, and sparse autoencoders, the model is trained on over 4 million multimodal dermatological samples, yielding latent representations that surface interpretable clinical concepts while suppressing artifact-related biases. DermFM-Zero achieves state-of-the-art performance across 20 benchmarks. In three multinational reader studies involving more than 1,100 clinicians, AI-assisted general practitioners nearly doubled their diagnostic accuracy, and the model itself outperformed board-certified dermatologists in multimodal skin cancer assessment.
📝 Abstract
Medical foundation models have shown promise in controlled benchmarks, yet widespread deployment remains hindered by reliance on task-specific fine-tuning. Here, we introduce DermFM-Zero, a dermatology vision-language foundation model trained via masked latent modelling and contrastive learning on over 4 million multimodal data points. Evaluated across 20 benchmarks spanning zero-shot diagnosis and multimodal retrieval, DermFM-Zero achieves state-of-the-art performance without task-specific adaptation. We further evaluated its zero-shot capabilities in three multinational reader studies involving over 1,100 clinicians. In primary care settings, AI assistance enabled general practitioners to nearly double their differential diagnostic accuracy across 98 skin conditions. In specialist settings, the model significantly outperformed board-certified dermatologists in multimodal skin cancer assessment. In collaborative workflows, AI assistance enabled non-experts to surpass unassisted experts while improving management appropriateness. Finally, we show that DermFM-Zero's latent representations are interpretable: sparse autoencoders disentangle clinically meaningful concepts without supervision, outperforming predefined-vocabulary approaches and enabling targeted suppression of artifact-induced biases, which enhances robustness without retraining. These findings demonstrate that a foundation model can provide effective, safe, and transparent zero-shot clinical decision support.
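To make the two mechanisms in the abstract concrete, here is a minimal sketch of (a) zero-shot diagnosis via image-text embedding similarity and (b) sparse-autoencoder concept suppression. It assumes a CLIP-style shared embedding space and a ReLU sparse autoencoder; the dimensions, prompts, and artifact-latent indices are illustrative stand-ins, not the paper's actual implementation, and random tensors replace DermFM-Zero's encoders so the snippet runs on its own.

```python
# Sketch only: random tensors stand in for DermFM-Zero's image/text encoders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, K = 512, 4096  # embedding dim and SAE dictionary size (illustrative)

def zero_shot_diagnose(img_emb, prompt_embs, temperature=0.07):
    """Zero-shot classification: cosine similarity between one image
    embedding and one text-prompt embedding per candidate condition."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(prompt_embs, dim=-1)
    logits = img @ txt.T / temperature        # (1, n_conditions)
    return logits.softmax(dim=-1)

class SAE(torch.nn.Module):
    """Overcomplete sparse autoencoder with ReLU latents. Suppressing a
    concept = zeroing its latent before decoding, which removes that
    direction from the embedding without touching the base model."""
    def __init__(self, d, k):
        super().__init__()
        self.enc = torch.nn.Linear(d, k)
        self.dec = torch.nn.Linear(k, d)

    def forward(self, x, suppress=()):
        z = torch.relu(self.enc(x))           # sparse concept activations
        z[..., list(suppress)] = 0.0          # ablate selected concepts
        return self.dec(z)

# Demo with random stand-ins for real embeddings and an untrained SAE.
img_emb = torch.randn(1, D)                   # lesion image embedding
prompt_embs = torch.randn(3, D)               # e.g. melanoma / nevus / BCC prompts
sae = SAE(D, K)

probs = zero_shot_diagnose(img_emb, prompt_embs)

# Hypothetical latent indices previously found to fire on acquisition
# artifacts (rulers, ink markings) rather than on the lesion itself:
ARTIFACT_LATENTS = {101, 2048}
clean_emb = sae(img_emb, suppress=ARTIFACT_LATENTS)
probs_debiased = zero_shot_diagnose(clean_emb, prompt_embs)
print(probs, probs_debiased)
```

Note the design point this illustrates: because the ablation happens entirely in the SAE's latent space at inference time, the artifact direction can be removed without retraining either the SAE or the foundation model, consistent with the abstract's "enhances robustness without retraining" claim.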