🤖 AI Summary
This work addresses the limited diagnostic accuracy of general-purpose large vision-language models in dermatology, which stems from their diffuse attention mechanisms struggling to distinguish subtle lesions from background noise. To overcome this, the study formulates skin disease diagnosis as an optimization problem of visual information transmission efficiency and proposes a Virtual-Width Dynamic Vision Encoder (DVE) that effectively unfolds complex pathological manifolds without increasing the parameter count. A two-stage reinforcement learning mechanism is further introduced to progressively align explicit medical descriptions with implicit diagnostic textures. Evaluated under a clinically grounded, safety-oriented protocol on Fitzpatrick17k, the proposed 7B model achieves a 12.06% improvement in Top-1 accuracy and a 28.57% gain in Top-6 accuracy, outperforming much larger models such as Qwen3VL-235B and GPT-5.2.
📝 Abstract
General-purpose Large Vision-Language Models (LVLMs), despite their massive scale, often falter in dermatology due to "diffuse attention": the inability to disentangle subtle pathological lesions from background noise. In this paper, we challenge the assumption that parameter scaling is the only path to medical precision. We introduce SkinFlow, a framework that treats diagnosis as an optimization of visual information transmission efficiency. Our approach utilizes a Virtual-Width Dynamic Vision Encoder (DVE) to "unfold" complex pathological manifolds without physical parameter expansion, coupled with a two-stage Reinforcement Learning strategy. This strategy sequentially aligns explicit medical descriptions (Stage I) and reconstructs implicit diagnostic textures (Stage II) within a constrained semantic space. Furthermore, we propose a clinically grounded evaluation protocol that prioritizes diagnostic safety and hierarchical relevance over rigid label matching. Empirical results are compelling: our 7B model establishes a new state of the art on the Fitzpatrick17k benchmark, achieving a +12.06% gain in Top-1 accuracy and a +28.57% boost in Top-6 accuracy over massive general-purpose models (e.g., Qwen3VL-235B and GPT-5.2). These findings demonstrate that optimizing geometric capacity and information flow yields superior diagnostic reasoning compared to raw parameter scaling.