Assessing Multimodal Chronic Wound Embeddings with Expert Triplet Agreement

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses two gaps: existing foundation models struggle to reliably capture clinically critical features of recessive dystrophic epidermolysis bullosa (RDEB), a rare inherited skin disorder, and the field lacks structured methods for measuring agreement with expert judgment. To close both gaps, the authors propose TriDerm, a multimodal framework that integrates wound images, boundary masks, and expert clinical reports. The approach uses expert triplet judgments as implicit supervision for clinical similarity, pairing wound-level attention pooling on the vision side with soft ordinal embeddings on the text side to enable interpretable phenotypic representation learning under few-shot conditions. By combining vision foundation model fine-tuning, non-contrastive representation learning, and large language model prompting, TriDerm achieves 73.5% expert agreement, surpassing the best single-modality baseline by over 5.6 percentage points.
📝 Abstract
Recessive dystrophic epidermolysis bullosa (RDEB) is a rare genetic skin disorder for which clinicians greatly benefit from finding similar cases using images and clinical text. However, off-the-shelf foundation models do not reliably capture clinically meaningful features for this heterogeneous, long-tail disease, and structured measurement of agreement with experts is challenging. To address these gaps, we propose evaluating embedding spaces with expert ordinal comparisons (triplet judgments), which are fast to collect and encode implicit clinical similarity knowledge. We further introduce TriDerm, a multimodal framework that learns interpretable wound representations from small cohorts by integrating wound imagery, boundary masks, and expert reports. On the vision side, TriDerm adapts visual foundation models to RDEB using wound-level attention pooling and non-contrastive representation learning. For text, we prompt large language models with comparison queries and recover medically meaningful representations via soft ordinal embeddings (SOE). We show that visual and textual modalities capture complementary aspects of wound phenotype, and that fusing both modalities yields 73.5% agreement with experts, outperforming the best off-the-shelf single-modality foundation model by over 5.6 percentage points. We make the expert annotation tool, model code, and representative dataset samples publicly available.
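The abstract's core evaluation idea, triplet agreement, can be sketched as follows: given expert judgments of the form "wound B is more similar to anchor A than wound C is," an embedding space scores the fraction of judgments it reproduces under a similarity measure. This is a minimal illustration, not the paper's released code; the function name, cosine similarity choice, and data layout are assumptions.

```python
import numpy as np

def triplet_agreement(embeddings, triplets):
    """Fraction of expert triplet judgments an embedding space reproduces.

    embeddings: dict mapping wound id -> 1-D feature vector.
    triplets: list of (anchor, positive, negative) ids, where experts
      judged `positive` more similar to `anchor` than `negative` is.
    (Illustrative sketch; cosine similarity is an assumed choice.)
    """
    def cos(i, j):
        a, b = embeddings[i], embeddings[j]
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    hits = sum(cos(a, p) > cos(a, n) for a, p, n in triplets)
    return hits / len(triplets)
```

A score of 1.0 means the embedding ranks every triplet the way the experts did; the paper's reported 73.5% corresponds to agreement on roughly three out of four triplets.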
Problem

Research questions and friction points this paper is trying to address.

chronic wound · multimodal embedding · expert agreement · RDEB · clinical similarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal embedding · triplet agreement · non-contrastive learning · soft ordinal embeddings · wound representation
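One listed innovation, wound-level attention pooling, aggregates patch features from a vision backbone into a single wound embedding via learned softmax weights. The sketch below shows the generic mechanism only; the scoring vector stands in for the paper's learned attention head, and all names are illustrative.

```python
import numpy as np

def attention_pool(patch_feats, w):
    """Pool per-patch features into one wound-level embedding.

    patch_feats: (n_patches, d) array from a vision backbone.
    w: (d,) scoring vector (stand-in for a learned attention head).
    """
    scores = patch_feats @ w                        # (n_patches,) relevance scores
    scores = scores - scores.max()                  # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax attention weights
    return alpha @ patch_feats                      # (d,) weighted mean of patches
```

Because the weights sum to one, the pooled embedding is a convex combination of the patch features, which keeps the per-patch attention weights inspectable and supports the interpretability claim.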