Derm1M: A Million-scale Vision-Language Dataset Aligned with Clinical Ontology Knowledge for Dermatology

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dermatological AI development is hindered by image–text datasets that are small, non-standardized, and poor in clinical semantics. To address this, we introduce Derm1M—the first million-scale dermatological vision–language dataset—comprising 1.029 million image–text pairs covering 390+ dermatologic conditions and 130 clinical concepts. Crucially, we systematically embed standardized clinical ontologies into the multimodal data curation pipeline, enabling hierarchical clinical semantic modeling—including disease taxonomy, skin phototype, medical history, and symptom descriptors. Leveraging an expert-constructed dermatology ontology aligned with structured educational resources, we pretrain the CLIP-style DermLIP model family. On eight dermatology benchmarks, DermLIP achieves up to a 21.3% improvement in zero-shot classification accuracy and a 34.7% gain in cross-modal retrieval Recall@10, comprehensively outperforming existing foundation models.
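The summary describes a four-level disease taxonomy (e.g. root → group → subgroup → condition) embedded into the curation pipeline. A minimal sketch of how such a hierarchy could be represented and traversed is below; the node class and the example disease path are illustrative assumptions, not taken from the Derm1M release.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyNode:
    """Hypothetical node in a hierarchical disease taxonomy."""
    name: str
    parent: "OntologyNode | None" = None
    children: list = field(default_factory=list)

    def add_child(self, name: str) -> "OntologyNode":
        child = OntologyNode(name, parent=self)
        self.children.append(child)
        return child

    def lineage(self) -> list[str]:
        """Path from the taxonomy root down to this node."""
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return list(reversed(path))

# Illustrative four-level path; real Derm1M labels may differ.
root = OntologyNode("skin disease")
inflammatory = root.add_child("inflammatory")
papulosquamous = inflammatory.add_child("papulosquamous")
psoriasis = papulosquamous.add_child("psoriasis")
print(psoriasis.lineage())
# ['skin disease', 'inflammatory', 'papulosquamous', 'psoriasis']
```

Storing the full lineage per image is what lets a model be trained or evaluated at any of the four levels of granularity.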

📝 Abstract
The emergence of vision-language models has transformed medical AI, enabling unprecedented advances in diagnostic capability and clinical applications. However, progress in dermatology has lagged behind other medical domains due to the lack of standard image-text pairs. Existing dermatological datasets are limited in both scale and depth, offering only single-label annotations across a narrow range of diseases instead of rich textual descriptions, and lacking the crucial clinical context needed for real-world applications. To address these limitations, we present Derm1M, the first large-scale vision-language dataset for dermatology, comprising 1,029,761 image-text pairs. Built from diverse educational resources and structured around a standard ontology collaboratively developed by experts, Derm1M provides comprehensive coverage of over 390 skin conditions across four hierarchical levels and 130 clinical concepts, with rich contextual information such as medical history, symptoms, and skin tone. To demonstrate Derm1M's potential in advancing both AI research and clinical application, we pretrained a series of CLIP-like models, collectively called DermLIP, on this dataset. The DermLIP family significantly outperforms state-of-the-art foundation models on eight diverse datasets across multiple tasks, including zero-shot skin disease classification, clinical and artifact concept identification, few-shot/full-shot learning, and cross-modal retrieval. Our dataset and code will be made public.
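Zero-shot skin disease classification with a CLIP-like model works by embedding the image and one text prompt per class into a shared space, then scoring classes by cosine similarity. A minimal sketch of that mechanism follows; random vectors stand in for real DermLIP encoder outputs, and the prompt wording and embedding dimension are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# One text prompt per candidate diagnosis (illustrative wording).
class_prompts = [
    "a dermatology photo of psoriasis",
    "a dermatology photo of melanoma",
    "a dermatology photo of eczema",
]

def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-ins for encoder outputs; 512 dims is typical for CLIP ViT-B models.
text_embeds = l2_normalize(rng.normal(size=(len(class_prompts), 512)))
image_embed = l2_normalize(rng.normal(size=(512,)))

# Cosine similarity of the image against each prompt, softmaxed into scores.
logits = text_embeds @ image_embed
probs = np.exp(logits) / np.exp(logits).sum()
prediction = class_prompts[int(np.argmax(probs))]
print(prediction, probs)
```

No task-specific training is needed: adding a new disease class only requires writing a new prompt, which is why ontology-grounded text matters for zero-shot coverage of 390+ conditions.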
Problem

Research questions and friction points this paper is trying to address.

Lack of large-scale dermatology image-text datasets
Limited clinical context in existing datasets
Need for comprehensive skin condition coverage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale vision-language dataset for dermatology
Structured around expert-developed clinical ontology
Pretrained CLIP-like models for diverse dermatological tasks
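The CLIP-like pretraining mentioned above optimizes a symmetric contrastive (InfoNCE) objective: matched image–text pairs sit on the diagonal of a batch similarity matrix and are pulled together while mismatched pairs are pushed apart. A minimal NumPy sketch, with random stand-ins for encoder outputs and an assumed temperature of 0.07:

```python
import numpy as np

def clip_loss(img: np.ndarray, txt: np.ndarray, temperature: float = 0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(img))                # matched pairs on the diagonal

    def xent(l: np.ndarray) -> float:
        # Numerically stable log-softmax cross-entropy against the diagonal.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average of the image-to-text and text-to-image directions.
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
print(clip_loss(rng.normal(size=(8, 512)), rng.normal(size=(8, 512))))
```

With random embeddings the loss sits near log(B); training drives it down by aligning each image with its own caption.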