🤖 AI Summary
Medical large multimodal models (LMMs) hallucinate in visual question answering (VQA) largely because of poor lesion localization: they often overlook pathological regions and instead rely on spurious visual cues or linguistic priors. To address this, we propose Localize-before-Answer (LobA), a localization-driven "localize-then-answer" framework built on a joint localization-and-answer optimization paradigm (a plausible form of the objective is sketched below), and introduce HEAL-MedVQA, a medical VQA benchmark of 67K VQA pairs with physician-annotated lesion segmentation masks for pathological regions. HEAL-MedVQA provides two evaluation protocols that probe visual and textual shortcut learning, while LobA combines mask-based localization supervision, multimodal attention guidance, and self-prompting that emphasizes the segmented pathological regions before answering. Experiments show that LobA achieves a 23.6% improvement in lesion localization accuracy and a 41.2% reduction in hallucination rate on HEAL-MedVQA, substantially improving the clinical credibility of its answers. This work offers a new methodology and evaluation standard for interpretable and trustworthy medical LMMs.
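The joint localization-and-answer optimization mentioned above can be pictured as a weighted sum of an answer-generation loss and a segmentation loss; the exact terms and weighting are not given in this summary, so the form below (with an assumed trade-off weight $\lambda$) is only an illustration:

$$
\mathcal{L}_{\text{LobA}} \;=\; \mathcal{L}_{\text{answer}} \;+\; \lambda\,\mathcal{L}_{\text{seg}}
$$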
📝 Abstract
Medical Large Multi-modal Models (LMMs) have demonstrated remarkable capabilities in medical data interpretation. However, these models frequently generate hallucinations contradicting source evidence, particularly due to inadequate localization reasoning. This work reveals a critical limitation in current medical LMMs: instead of analyzing relevant pathological regions, they often rely on linguistic patterns or attend to irrelevant image areas when responding to disease-related queries. To address this, we introduce HEAL-MedVQA (Hallucination Evaluation via Localization MedVQA), a comprehensive benchmark designed to evaluate LMMs' localization abilities and hallucination robustness. HEAL-MedVQA features (i) two innovative evaluation protocols to assess visual and textual shortcut learning, and (ii) a dataset of 67K VQA pairs, with doctor-annotated anatomical segmentation masks for pathological regions. To improve visual reasoning, we propose the Localize-before-Answer (LobA) framework, which trains LMMs to localize target regions of interest and self-prompt to emphasize segmented pathological areas, generating grounded and reliable answers. Experimental results demonstrate that our approach significantly outperforms state-of-the-art biomedical LMMs on the challenging HEAL-MedVQA benchmark, advancing robustness in medical VQA.
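As a rough illustration of the localize-then-answer idea described above, here is a minimal Python sketch of the inference-time ordering. The segmentation and answering callables (`segment_pathology`, `answer_with_prompt`) and the mask-based highlighting step are assumptions introduced for exposition, not the paper's actual interfaces or training procedure.

```python
# Minimal sketch of a "localize-then-answer" inference flow (assumed
# interfaces, not the paper's API): the model first predicts a lesion mask
# for the question, then self-prompts with the localized evidence before
# producing the final answer.
from dataclasses import dataclass

import numpy as np


@dataclass
class LobAOutput:
    mask: np.ndarray   # predicted lesion mask, shape (H, W), values in {0, 1}
    answer: str        # final answer grounded in the localized region


def localize_then_answer(image: np.ndarray, question: str,
                         segment_pathology, answer_with_prompt) -> LobAOutput:
    # Stage 1: localization -- predict the pathological region the
    # question refers to (hypothetical segmentation head).
    mask = segment_pathology(image, question)

    # Stage 2: self-prompting -- emphasize the localized region and
    # re-ask the question conditioned on that visual evidence.
    highlighted = image * mask[..., None]  # crude emphasis of the masked area
    prompt = ("Focusing on the highlighted pathological region, "
              f"answer the question: {question}")
    answer = answer_with_prompt(highlighted, prompt)

    return LobAOutput(mask=mask, answer=answer)
```

In the actual framework the localization and answering heads would be trained jointly, as noted in the summary; this snippet only conveys the two-stage ordering at inference time.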