🤖 AI Summary
Vision-language models (VLMs) for dermatology exhibit weak structured reasoning in low-resource settings, where annotated dermatological data are scarce and advanced training methods incur prohibitive computational costs. Method: The authors propose DermIQ-VLM, trained through a multi-stage, dermatology-informed pipeline: GRPO++, a stabilized variant of Group Relative Policy Optimization (GRPO), drives reasoning-oriented disease recognition; supervised fine-tuning (SFT) adds conversational ability; and Direct Preference Optimization (DPO), using a knowledge-graph-based system as a scalable proxy for expert preference, corrects factual errors introduced during SFT. Contribution/Results: The pipeline reduces dependence on both labeled data and compute. In a preliminary evaluation on a curated dermatological dataset, it yields notable gains over standard fine-tuning in disease classification accuracy and medical dialogue quality, pointing to a scalable, resource-efficient path for deploying VLMs in clinical dermatology.
📝 Abstract
Vision-Language Models (VLMs) show promise in medical image analysis, yet their capacity for structured reasoning in complex domains like dermatology is often limited by data scarcity and the high computational cost of advanced training techniques. To address these challenges, we introduce DermIQ-VLM, a VLM developed through a multi-stage, resource-efficient methodology designed to emulate a dermatologist's diagnostic process. Our primary contribution is a modified version of Group Relative Policy Optimization (GRPO), called GRPO++, which stabilizes the powerful but data-intensive GRPO framework. Our proposed training pipeline first employs GRPO++ for reasoning-oriented disease recognition, followed by supervised fine-tuning for conversational ability. To mitigate factual errors introduced during this stage, we then align the model using Direct Preference Optimization (DPO), leveraging a Knowledge Graph-based system as a scalable proxy for expert preference. A preliminary evaluation on a curated dermatological dataset demonstrates that our proposed methodology yields notable performance gains over standard fine-tuning approaches. These findings validate the potential of our pipeline as a feasible pathway for developing specialized, reliable VLMs in resource-constrained environments.
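The two optimization stages named above can be sketched in miniature. The abstract does not give the exact loss formulations of GRPO++, so the code below shows the standard building blocks they extend: GRPO's group-relative advantage (rewards normalized within a group of sampled responses, with no value network) and the standard DPO objective (Rafailov et al., 2023), where the chosen/rejected pair would here come from the knowledge-graph preference proxy. All function and argument names are illustrative, not from the paper.

```python
import math


def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each sampled response's reward
    against the mean and std of its own sampling group, so no learned
    value network (critic) is needed."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) + 1e-8  # epsilon guards against a zero-variance group
    return [(r - mean) / std for r in rewards]


def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair. The logp_* arguments are
    sequence log-probabilities under the trainable policy and a frozen
    reference model; beta scales the implicit reward margin."""
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)): small when the policy prefers the chosen answer
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

For example, four sampled answers scored 1.0, 0.0, 0.5, 0.5 by a reward function yield advantages that sum to zero, rewarding only above-group-average responses; and a DPO pair where the policy already favors the KG-preferred answer produces a loss below log 2 (the value at zero margin).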