🤖 AI Summary
Existing Med-VQA models often neglect salient image regions, leading to suboptimal semantic understanding and compromised clinical reasoning. Method: We propose a physician-informed, region-aware enhancement framework that maps sparse bounding-box annotations into the CLIP image-embedding space to guide LLaVA’s attention toward clinically relevant visual regions. Crucially, we inject lightweight, structured region priors directly into the image representation and introduce a novel medical multiple-choice visual understanding benchmark. Our approach integrates region-aware image encoding, CLIP-based vision–language alignment, and multi-task joint fine-tuning. Contribution/Results: The method achieves state-of-the-art performance across four major Med-VQA benchmarks. Evaluation on the new benchmark shows marked improvements in vision–language alignment fidelity and clinical reasoning accuracy, validating the efficacy of incorporating domain-specific spatial priors into multimodal medical foundation models.
📝 Abstract
Artificial intelligence has made significant strides in medical visual question answering (Med-VQA), yet prevalent studies often interpret images holistically, overlooking visual regions of interest that may contain crucial information. Such regions frequently correspond to a doctor's prior knowledge, which can be incorporated with minimal annotations (e.g., bounding boxes). To address this gap, this paper introduces R-LLaVA, designed to enhance biomedical VQA understanding by integrating simple medical annotations as prior knowledge directly into the image space through CLIP. These annotated visual regions of interest are then fed into the LLaVA model during training, enriching the model's understanding of biomedical queries. Experimental evaluation on four standard Med-VQA datasets demonstrates R-LLaVA's superiority over existing state-of-the-art (SoTA) methods. Additionally, to verify the model's capability in visual comprehension, a novel multiple-choice medical visual understanding dataset is introduced, confirming the positive impact of focusing on visual regions of interest in advancing biomedical VQA understanding.
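The abstract describes injecting bounding-box annotations as priors into the image space before the visual features reach LLaVA. A minimal sketch of one plausible realization is below: encode both the full image and the annotated region, then blend the two embeddings. Note the assumptions: `encode_image` here is a deterministic stand-in for CLIP's actual image encoder (a real pipeline would use, e.g., a ViT from `open_clip` or Hugging Face), and the simple linear blend with weight `alpha` is an illustrative choice, not the paper's exact fusion mechanism.

```python
import numpy as np

def encode_image(pixels: np.ndarray, dim: int = 512) -> np.ndarray:
    """Stand-in for CLIP's image encoder: a fixed random projection of
    the mean-pooled pixel values, kept only so the sketch is
    self-contained and runnable. Not the real CLIP model."""
    rng = np.random.default_rng(0)  # fixed seed = fixed "weights"
    proj = rng.standard_normal((pixels.shape[-1], dim))
    feat = pixels.reshape(-1, pixels.shape[-1]).mean(axis=0) @ proj
    return feat / np.linalg.norm(feat)

def region_aware_embedding(image: np.ndarray,
                           box: tuple[int, int, int, int],
                           alpha: float = 0.5) -> np.ndarray:
    """Blend the whole-image embedding with the embedding of the
    physician-annotated region (x0, y0, x1, y1): one simple way to
    inject a bounding-box prior into the image representation."""
    x0, y0, x1, y1 = box
    roi = image[y0:y1, x0:x1]                 # crop the annotated region
    full_emb = encode_image(image)
    roi_emb = encode_image(roi)
    mixed = (1.0 - alpha) * full_emb + alpha * roi_emb
    return mixed / np.linalg.norm(mixed)      # unit-norm, CLIP-style

# Toy 64x64 RGB "scan" with a bright patch standing in for a lesion,
# annotated by a bounding box around that patch.
img = np.zeros((64, 64, 3))
img[20:30, 40:50] = 1.0
emb = region_aware_embedding(img, box=(40, 20, 50, 30))
```

In a real system the blended (or concatenated) embedding would replace the plain CLIP feature handed to LLaVA's projector, so that downstream attention is biased toward the annotated region.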