🤖 AI Summary
Existing deep learning models for dermatological multimodal clinical tasks suffer from poor generalizability, heavy reliance on labeled data, and limited robustness across diverse populations. To address these challenges, the authors propose PanDerm, a foundation model for comprehensive multimodal dermatological analysis. PanDerm is pretrained on over 2 million real-world clinical images from 11 institutions spanning four imaging modalities, leveraging self-supervised learning and cross-modal feature alignment to build a unified medical representation. Evaluated on 28 downstream datasets, it achieves state-of-the-art performance across tasks, often outperforming existing models with only 5–10% of the labeled data. In reader studies, PanDerm surpassed clinicians by 10.2% in early-stage melanoma detection accuracy, and in a collaborative human-AI setting it improved clinicians' multiclass skin cancer diagnostic accuracy by 11%. Crucially, PanDerm generalizes robustly across institutions, imaging modalities, and demographic subgroups, including different skin tones, age groups, genders, and body locations.
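Neither the summary nor the abstract specifies the pretraining recipe, so the following is a minimal PyTorch sketch of how the two stated signals, self-supervised masked reconstruction and cross-modal feature alignment, could be combined in one step. `ToyEncoder`, `pretrain_step`, the masking ratio, the temperature, and the equal loss weighting are all illustrative assumptions, not PanDerm's published configuration.

```python
# Hypothetical sketch: masked self-supervised reconstruction plus an
# InfoNCE-style cross-modal alignment loss. All hyperparameters here are
# assumptions for illustration, not PanDerm's actual setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Stand-in ViT-style backbone: patch embedding + transformer blocks."""
    def __init__(self, patch=16, dim=256):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.dim = dim

    def forward(self, x):
        tokens = self.patchify(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.blocks(tokens)

def mask_tokens(tokens, ratio=0.6):
    """Zero out a random fraction of patch tokens (simplified masking)."""
    keep = torch.rand(tokens.shape[:2], device=tokens.device) > ratio
    return tokens * keep.unsqueeze(-1), keep

def pretrain_step(encoder, decoder, proj_a, proj_b, img_a, img_b, opt):
    """One step combining masked reconstruction with alignment between two
    imaging modalities of the same lesion (e.g., clinical photo + dermoscopy)."""
    # Self-supervised branch: reconstruct the masked patch embeddings.
    tokens = encoder.patchify(img_a).flatten(2).transpose(1, 2)
    masked, keep = mask_tokens(tokens)
    recon = decoder(encoder.blocks(masked))
    rec_loss = F.mse_loss(recon[~keep], tokens[~keep].detach())

    # Cross-modal branch: align pooled features of paired modalities.
    z_a = F.normalize(proj_a(encoder(img_a).mean(dim=1)), dim=-1)
    z_b = F.normalize(proj_b(encoder(img_b).mean(dim=1)), dim=-1)
    logits = z_a @ z_b.t() / 0.07               # temperature is an assumption
    targets = torch.arange(len(z_a), device=z_a.device)
    align_loss = F.cross_entropy(logits, targets)

    loss = rec_loss + align_loss                # equal weighting: assumption
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    enc = ToyEncoder()
    dec = nn.Linear(enc.dim, enc.dim)
    pa, pb = nn.Linear(enc.dim, 128), nn.Linear(enc.dim, 128)
    params = (list(enc.parameters()) + list(dec.parameters())
              + list(pa.parameters()) + list(pb.parameters()))
    opt = torch.optim.AdamW(params, lr=1e-4)
    a, b = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
    print(pretrain_step(enc, dec, pa, pb, a, b, opt))
```

The contrastive targets are the diagonal of the batch similarity matrix, i.e., each image is pulled toward its paired view in the other modality and pushed away from the rest of the batch.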
📝 Abstract
Diagnosing and treating skin diseases require advanced visual skills across multiple domains and the ability to synthesize information from various imaging modalities. Current deep learning models, while effective at specific tasks such as diagnosing skin cancer from dermoscopic images, fall short in addressing the complex, multimodal demands of clinical practice. Here, we introduce PanDerm, a multimodal dermatology foundation model pretrained through self-supervised learning on a dataset of over 2 million real-world images of skin diseases, sourced from 11 clinical institutions across 4 imaging modalities. We evaluated PanDerm on 28 diverse datasets covering a range of clinical tasks, including skin cancer screening, phenotype assessment and risk stratification, diagnosis of neoplastic and inflammatory skin diseases, skin lesion segmentation, change monitoring, and metastasis prediction and prognosis. PanDerm achieved state-of-the-art performance across all evaluated tasks, often outperforming existing models even when using only 5–10% of labeled data. PanDerm's clinical utility was demonstrated through reader studies in real-world clinical settings across multiple imaging modalities. It outperformed clinicians by 10.2% in early-stage melanoma detection accuracy and enhanced clinicians' multiclass skin cancer diagnostic accuracy by 11% in a collaborative human-AI setting. Additionally, PanDerm demonstrated robust performance across diverse demographic factors, including different body locations, age groups, genders, and skin tones. The strong results in benchmark evaluations and real-world clinical scenarios suggest that PanDerm could enhance the management of skin diseases and serve as a model for developing multimodal foundation models in other medical specialties, potentially accelerating the integration of AI support in healthcare.
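To make the 5–10% label-efficiency claim concrete, here is a hedged sketch of a standard linear-probe evaluation: the pretrained encoder is frozen and a linear head is fit on a small labeled fraction of a downstream dataset. The random subset sampling, optimizer, and `encoder` interface (reusing the toy encoder from the sketch above) are assumptions, not the paper's exact protocol.

```python
# Illustrative label-efficiency evaluation: linear probe on frozen features
# using only 10% of the labels. Details are assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

def linear_probe(encoder, dataset, num_classes,
                 label_fraction=0.10, epochs=10, device="cpu"):
    """Fit a linear head on frozen features using a small labeled fraction
    of the training set (random split; the exact sampling protocol used in
    the paper is not specified here)."""
    idx = torch.randperm(len(dataset))[: int(len(dataset) * label_fraction)]
    loader = DataLoader(Subset(dataset, idx.tolist()),
                        batch_size=64, shuffle=True)

    encoder.eval().to(device)                      # frozen backbone
    head = nn.Linear(encoder.dim, num_classes).to(device)
    opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

    for _ in range(epochs):
        for imgs, labels in loader:
            imgs, labels = imgs.to(device), labels.to(device)
            with torch.no_grad():                  # no encoder gradients
                feats = encoder(imgs).mean(dim=1)  # pool patch tokens
            loss = nn.functional.cross_entropy(head(feats), labels)
            opt.zero_grad(); loss.backward(); opt.step()
    return head
```

Keeping the backbone frozen isolates the quality of the pretrained representation: any accuracy gained with so few labels must come from the features themselves rather than from task-specific fine-tuning.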