🤖 AI Summary
This work addresses three key challenges in skin cancer diagnosis with vision-language models: high computational cost, data scarcity, and poor interpretability. To this end, the authors propose an efficient, clinically trustworthy multimodal diagnostic framework that freezes the CLIP visual encoder and integrates a lightweight, quantized Qwen2.5-VL language model. The approach further incorporates low-rank adaptation (LoRA) and a novel consistency-aware focal alignment (CFA) loss to precisely align lesion regions with clinical semantics under long-tailed data distributions. Evaluated on the ISIC and Derm7pt benchmarks, the method achieves 4.3–6.2% higher accuracy than a 13B-parameter baseline while using 43% fewer parameters. Blinded expert reviews and out-of-distribution testing confirm superior interpretability and clinical credibility compared to conventional saliency-map techniques.
📝 Abstract
The deployment of vision-language models (VLMs) in dermatology is hindered by a trilemma: high computational cost, extreme data scarcity, and the black-box nature of deep learning. To address these challenges, we present SkinCLIP-VL, a resource-efficient framework that adapts foundation models for trustworthy skin cancer diagnosis. Adopting a "frozen perception, adaptive reasoning" paradigm, we integrate a frozen CLIP encoder with a lightweight, quantized Qwen2.5-VL via low-rank adaptation (LoRA). To strictly align visual regions with clinical semantics under long-tailed distributions, we propose the Consistency-aware Focal Alignment (CFA) loss, which combines focal re-weighting, semantic alignment, and calibration. On the ISIC and Derm7pt benchmarks, SkinCLIP-VL surpasses 13B-parameter baselines by 4.3–6.2% in accuracy with 43% fewer parameters. Crucially, blinded expert evaluation and out-of-distribution testing confirm that our visually grounded rationales significantly enhance clinical trust compared to traditional saliency maps.
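The abstract names the three components of the CFA objective (focal re-weighting, semantic alignment, calibration) without giving formulas. As a rough illustration only, a loss of this shape could be sketched as below; the combination weights (`lam_align`, `lam_cal`), the focusing parameter `gamma`, the cosine-similarity alignment term, and the entropy-based calibration proxy are all assumptions for illustration, not the paper's actual definitions.

```python
import numpy as np

def focal_weight(p_t, gamma=2.0):
    # Focal re-weighting: down-weight easy (high-confidence) examples
    # so rare, hard classes in a long-tailed distribution dominate the loss.
    return (1.0 - p_t) ** gamma

def cfa_loss(probs, labels, img_emb, txt_emb,
             gamma=2.0, lam_align=0.5, lam_cal=0.1):
    """Hypothetical sketch of a consistency-aware focal alignment loss:
    focal cross-entropy + image-text alignment + a calibration penalty.
    All weights and term definitions here are illustrative assumptions."""
    n = len(labels)
    p_t = probs[np.arange(n), labels]  # probability of the true class
    focal_ce = np.mean(focal_weight(p_t, gamma) * -np.log(p_t))

    # Semantic alignment: 1 - cosine similarity between the lesion-image
    # embedding and the clinical-text embedding of the same case.
    img_n = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt_n = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    align = np.mean(1.0 - np.sum(img_n * txt_n, axis=1))

    # Calibration proxy: negative entropy penalizes over-confident outputs.
    cal = np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))

    return focal_ce + lam_align * align + lam_cal * cal
```

The focal term concentrates gradient on misclassified tail classes, the alignment term ties visual regions to clinical semantics, and the calibration term discourages the over-confidence that undermines clinical trust; how the paper actually balances these terms is not specified in the abstract.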