🤖 AI Summary
Medical imaging diagnosis faces challenges including subtle pathological manifestations, high inter-subject anatomical variability, and strong structural similarity across regions; existing medical vision-language models lack comparative reasoning capabilities, while general-purpose models lack clinical discriminative knowledge. To address this, we propose a clinically inspired pairwise reference-image comparative reasoning framework: query images are paired with either normal reference images or prior longitudinal scans, and a diagnostic-logic-aligned comparative prompting strategy is designed to enable multi-image joint inference via supervised fine-tuning. This work introduces, for the first time, an effective structured multi-image comparison mechanism into medical vision-language modeling. Evaluated on multiple medical visual question answering benchmarks, our method significantly outperforms single-image baselines (average +5.2% accuracy), empirically validating that reference-guided comparative analysis enhances fine-grained clinical discrimination.
📝 Abstract
Medical imaging diagnosis presents inherent challenges due to diseases that mimic normal anatomy and exhibit significant inter-patient variability. Clinicians routinely employ comparative reasoning, using reference images from healthy controls or previous patient examinations, to discern subtle yet diagnostically critical abnormalities. However, existing medical vision-language models (VLMs) focus primarily on single-image or single-series analyses and lack explicit mechanisms for comparative reasoning. Conversely, general-purpose VLMs demonstrate strong multi-image comparative reasoning capabilities but lack the essential medical-domain knowledge needed to identify nuanced clinical differences. This work aims to bridge this gap by exploring clinically inspired comparative analysis within VLMs, leveraging reference images to enhance diagnostic accuracy. Through extensive empirical analysis, we show that providing general-purpose VLMs with query images and matched normative reference images, accompanied by clinically informed comparative prompts, significantly improves diagnostic outcomes compared to single-image baselines, especially after supervised fine-tuning (SFT). Our contributions highlight the clinical relevance of comparative analysis, introduce novel strategies for leveraging reference images in VLMs, empirically demonstrate enhanced performance across multiple medical visual question answering (VQA) tasks, and provide theoretical insights into the efficacy of comparative image analysis in medical diagnosis.
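The pairwise comparative prompting described above can be sketched as follows. This is a minimal, hypothetical illustration of packing a query scan, a matched normative reference, and a diagnostic-logic-aligned instruction into a single multi-image chat turn; the message schema and field names mirror common multimodal chat APIs and are not the authors' actual implementation.

```python
# Hypothetical sketch of the comparative prompting setup (illustrative only).
# A clinically informed instruction that directs the model to compare
# corresponding regions before answering, as motivated in the abstract.
COMPARATIVE_PROMPT = (
    "You are given two medical images. Image 1 is the query scan; "
    "Image 2 is a matched normal reference. Compare corresponding "
    "anatomical regions, describe any deviations of the query from "
    "the reference, and then answer the question.\n\n"
    "Question: {question}"
)

def build_comparative_message(query_image_path: str,
                              reference_image_path: str,
                              question: str) -> dict:
    """Pack the query image, the normative reference image, and the
    comparative instruction into one user turn for a multi-image VLM."""
    return {
        "role": "user",
        "content": [
            {"type": "image", "path": query_image_path},      # query scan
            {"type": "image", "path": reference_image_path},  # normal reference
            {"type": "text",
             "text": COMPARATIVE_PROMPT.format(question=question)},
        ],
    }

# Example turn for a chest X-ray VQA item (paths are placeholders).
msg = build_comparative_message(
    "query_cxr.png", "normal_cxr.png",
    "Is there evidence of pleural effusion?",
)
```

The same message structure extends naturally to the longitudinal setting by substituting a prior scan of the same patient for the normative reference and adjusting the instruction accordingly.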