Reducing Hallucinations of Medical Multimodal Large Language Models with Visual Retrieval-Augmented Generation

📅 2025-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical multimodal large language models (MLLMs) frequently generate clinically implausible hallucinations in radiology report generation, particularly when describing medical entities without supporting visual evidence. To address this, the paper proposes Visual Retrieval-Augmented Generation (V-RAG), a retrieval-augmented generation framework that incorporates both the text and the visual features of retrieved images. V-RAG improves the accuracy of *entity probing*, which asks whether a generated medical entity is actually grounded in the image. Evaluated on MIMIC-CXR chest X-ray report generation and Multicare medical image captioning, the gains hold for both frequent and rare entities, the latter of which have less positive training data. Applied downstream to correct hallucinations, V-RAG with entity probing yields more clinically accurate X-ray reports and a higher RadGraph-F1 score.

📝 Abstract
Multimodal Large Language Models (MLLMs) have shown impressive performance in vision and text tasks. However, hallucination remains a major challenge, especially in fields like healthcare where details are critical. In this work, we show how MLLMs may be enhanced to support Visual RAG (V-RAG), a retrieval-augmented generation framework that incorporates both text and visual data from retrieved images. On the MIMIC-CXR chest X-ray report generation and Multicare medical image caption generation datasets, we show that Visual RAG improves the accuracy of entity probing, which asks whether a medical entity is grounded by an image. We show that the improvements extend both to frequent and rare entities, the latter of which may have less positive training data. Downstream, we apply V-RAG with entity probing to correct hallucinations and generate more clinically accurate X-ray reports, obtaining a higher RadGraph-F1 score.
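The entity-probing evaluation described in the abstract can be sketched as a simple yes/no scoring loop: for each medical entity, the model is asked whether the entity is visible in the image, and the answer is checked against the reference report. This is a minimal sketch under assumed interfaces; `entity_probing_accuracy` and the toy probe answers are illustrative, not the paper's implementation, and a real setup would obtain the answers by querying the MLLM with the image.

```python
def entity_probing_accuracy(probe_answers, reference_entities):
    """Score yes/no entity probes against a reference report.

    probe_answers: dict entity -> bool (model claims the entity is in the image)
    reference_entities: set of entities actually present in the reference report
    Returns the fraction of probed entities whose answer matches the reference.
    """
    correct = sum(
        answer == (entity in reference_entities)
        for entity, answer in probe_answers.items()
    )
    return correct / len(probe_answers)

# Toy example with hypothetical probe outputs for one chest X-ray.
answers = {"pneumothorax": False, "cardiomegaly": True, "effusion": True}
reference = {"cardiomegaly"}
acc = entity_probing_accuracy(answers, reference)
# "effusion" is a hallucination here (claimed but not in the reference),
# so 2 of 3 probes are scored correct.
```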
Problem

Research questions and friction points this paper is trying to address.

Hallucination in medical MLLM outputs, where clinical details are critical
Grounding generated medical entities in image evidence, especially rare entities with little positive training data
Clinical accuracy of generated chest X-ray reports
Innovation

Methods, ideas, or system contributions that make the work stand out.

V-RAG: retrieval-augmented generation over both the text and visual features of retrieved images
Entity probing as a measure of whether generated medical entities are image-grounded
Hallucination correction with entity probing, yielding a higher RadGraph-F1 score
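The retrieval step behind V-RAG can be sketched as nearest-neighbor search over image embeddings, with the retrieved cases' report text folded into the generation prompt. The sketch below assumes plain vectors as embeddings; the function names (`retrieve_topk`, `build_prompt`) and the toy corpus are illustrative, not the paper's API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_topk(query_emb, corpus, k=2):
    """corpus: list of (image_embedding, report_text) pairs.
    Returns the report texts of the k most similar images."""
    ranked = sorted(corpus, key=lambda item: cosine(query_emb, item[0]),
                    reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(query_emb, corpus, instruction):
    """Prepend retrieved reports as context before the generation instruction."""
    context = "\n".join(f"Similar case: {r}" for r in retrieve_topk(query_emb, corpus))
    return f"{context}\n{instruction}"

# Toy retrieval corpus: 2-d stand-ins for image embeddings.
corpus = [
    ([1.0, 0.0], "No acute cardiopulmonary findings."),
    ([0.9, 0.1], "Mild cardiomegaly, no effusion."),
    ([0.0, 1.0], "Large right pleural effusion."),
]
prompt = build_prompt([1.0, 0.05], corpus,
                      "Generate a report for the query image.")
```

In a real pipeline the retrieved images' visual features would also be passed to the MLLM alongside this text context; the sketch covers only the text half of the augmentation.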