Localizing Before Answering: A Benchmark for Grounded Medical Visual Question Answering

📅 2025-04-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Medical large multimodal models (LMMs) suffer from hallucination in visual question answering (VQA) due to poor lesion localization—often overlooking pathological regions and relying instead on spurious visual cues or linguistic priors. To address this, we propose Locality-oriented before Answering (LobA), a localization-driven “localize-then-answer” framework. We introduce HEAL-MedVQA, the first medical VQA benchmark featuring 67K physician-annotated lesion segmentation masks, and establish a joint localization-and-answer optimization paradigm. Our method innovates with a dual-path shortcut learning evaluation protocol, mask-based localization supervision, multimodal attention guidance, and self-prompting enhancement. Experiments show that LobA achieves a 23.6% improvement in lesion localization accuracy and a 41.2% reduction in hallucination rate on HEAL-MedVQA, while significantly enhancing clinical credibility of answers. This work provides a novel methodology and evaluation standard for interpretable and trustworthy medical LMMs.

Technology Category

Application Category

📝 Abstract
Medical Large Multi-modal Models (LMMs) have demonstrated remarkable capabilities in medical data interpretation. However, these models frequently generate hallucinations contradicting source evidence, particularly due to inadequate localization reasoning. This work reveals a critical limitation in current medical LMMs: instead of analyzing relevant pathological regions, they often rely on linguistic patterns or attend to irrelevant image areas when responding to disease-related queries. To address this, we introduce HEAL-MedVQA (Hallucination Evaluation via Localization MedVQA), a comprehensive benchmark designed to evaluate LMMs' localization abilities and hallucination robustness. HEAL-MedVQA features (i) two innovative evaluation protocols to assess visual and textual shortcut learning, and (ii) a dataset of 67K VQA pairs, with doctor-annotated anatomical segmentation masks for pathological regions. To improve visual reasoning, we propose the Localize-before-Answer (LobA) framework, which trains LMMs to localize target regions of interest and self-prompt to emphasize segmented pathological areas, generating grounded and reliable answers. Experimental results demonstrate that our approach significantly outperforms state-of-the-art biomedical LMMs on the challenging HEAL-MedVQA benchmark, advancing robustness in medical VQA.
Problem

Research questions and friction points this paper is trying to address.

Evaluating medical LMMs' localization and hallucination robustness
Addressing inadequate localization reasoning in medical VQA
Improving visual grounding via Localize-before-Answer framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces HEAL-MedVQA benchmark for LMM evaluation
Proposes Localize-before-Answer (LobA) framework
Uses doctor-annotated anatomical segmentation masks
D
Dung Nguyen
Hanoi University of Science and Technology
M
Minh Khoi Ho
Hanoi University of Science and Technology
H
Huy Ta
Australian Institute for Machine Learning, The University of Adelaide
Thanh Tam Nguyen
Thanh Tam Nguyen
Lecturer, Griffith University
Social Network MiningStream ProcessingBig DataPrivacy-Preserving MLRecommender Systems
Q
Qi Chen
Australian Institute for Machine Learning, The University of Adelaide
K
Kumar Rav
College of Medicine and Public Health, Flinders University
Q
Quy Duong Dang
Australian Institute for Machine Learning, The University of Adelaide
S
Satwik Ramchandre
Australian Institute for Machine Learning, The University of Adelaide
S
S. L. Phung
University of Wollongong
Zhibin Liao
Zhibin Liao
School of Computer and Mathematical Sciences, University of Adelaide
Deep LearningMachine LearningMedical Image Analysis
Minh-Son To
Minh-Son To
Flinders Health and Medical Research Institute, Flinders University
Machine LearningComputer VisionMedical ImagingNeuroscienceBiostatistics
J
Johan W. Verjans
Australian Institute for Machine Learning, The University of Adelaide
P
Phi Le Nguyen
Hanoi University of Science and Technology
Vu Minh Hieu Phan
Vu Minh Hieu Phan
Australian Institute for Machine Learning, University of Adelaide
GenAIMulti-modal LearningTransformerComputer VisionKnowledge Distillation