Deterministic Hallucination Detection in Medical VQA via Confidence-Evidence Bayesian Gain

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of hallucinated responses in multimodal large language models (MLLMs) for medical visual question answering (VQA), where generated answers contradict the visual evidence in the input image. Existing hallucination detection methods suffer from high computational overhead and reliance on external models. To overcome these limitations, the authors propose CEBaG, a deterministic, end-to-end hallucination detection framework that requires no sampling, external dependencies, or task-specific hyperparameters. CEBaG leverages the model's own internal log-probabilities, integrating token-level predictive variance with visual evidence magnitude to form a Confidence-Evidence Bayesian Gain metric. Evaluated across four medical MLLMs and three VQA benchmarks under 16 distinct settings, CEBaG achieves the highest AUC in 13 of 16 settings, outperforming the prior state-of-the-art method VASE by an average of 8 AUC points.

📝 Abstract
Multimodal large language models (MLLMs) have shown strong potential for medical Visual Question Answering (VQA), yet they remain prone to hallucinations, defined as generating responses that contradict the input image, posing serious risks in clinical settings. Current hallucination detection methods, such as Semantic Entropy (SE) and Vision-Amplified Semantic Entropy (VASE), require 10 to 20 stochastic generations per sample together with an external natural language inference model for semantic clustering, making them computationally expensive and difficult to deploy in practice. We observe that hallucinated responses exhibit a distinctive signature directly in the model's own log-probabilities: inconsistent token-level confidence and weak sensitivity to visual evidence. Based on this observation, we propose Confidence-Evidence Bayesian Gain (CEBaG), a deterministic hallucination detection method that requires no stochastic sampling, no external models, and no task-specific hyperparameters. CEBaG combines two complementary signals: token-level predictive variance, which captures inconsistent confidence across response tokens, and evidence magnitude, which measures how much the image shifts per-token predictions relative to text-only inference. Evaluated across four medical MLLMs and three VQA benchmarks (16 experimental settings), CEBaG achieves the highest AUC in 13 of 16 settings and improves over VASE by 8 AUC points on average, while being fully deterministic and self-contained. The code will be made available upon acceptance.
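The abstract describes CEBaG as combining two log-probability signals: token-level predictive variance (inconsistent confidence across response tokens) and evidence magnitude (how much the image shifts per-token predictions versus text-only inference). The sketch below illustrates how such signals could be computed from per-token log-probabilities; the combination rule and function name are illustrative assumptions, not the paper's actual formula.

```python
import numpy as np

def confidence_evidence_score(logprobs_with_image, logprobs_text_only):
    """Illustrative hallucination score from two log-probability signals.

    Both inputs are 1-D sequences of per-token log-probabilities for the
    same generated response, scored with and without the image as input.
    Higher return values suggest a more hallucination-like response.
    """
    lp_img = np.asarray(logprobs_with_image, dtype=float)
    lp_txt = np.asarray(logprobs_text_only, dtype=float)

    # Signal 1: token-level predictive variance. Hallucinated responses
    # tend to mix high- and low-confidence tokens.
    predictive_variance = lp_img.var()

    # Signal 2: evidence magnitude. Measures how much the image shifts
    # each token's prediction relative to text-only inference; a weak
    # shift means the answer is insensitive to the visual evidence.
    evidence_magnitude = np.abs(lp_img - lp_txt).mean()

    # Simple additive combination for illustration only: high variance
    # and low visual sensitivity both push the score upward.
    return predictive_variance - evidence_magnitude
```

Under this toy scoring rule, a response with steady confidence that shifts strongly with the image scores low, while one with erratic confidence and little sensitivity to the image scores high.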
Problem

Research questions and friction points this paper is trying to address.

hallucination detection
medical VQA
multimodal large language models
deterministic detection
clinical safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

hallucination detection
medical VQA
deterministic method
multimodal LLMs
Bayesian gain
Mohammad Asadi
Department of Electrical Engineering, Stanford University, CA, USA; Department of Biology, Stanford University, CA, USA
Tahoura Nedaee
Division of Cardiology, Department of Medicine, Stanford University, CA, USA
Jack W. O'Sullivan
Department of Biomedical Data Science, Stanford University, CA, USA; Department of Computer Science, Stanford University, CA, USA
Euan Ashley
Department of Biomedical Data Science, Stanford University, CA, USA; Department of Computer Science, Stanford University, CA, USA
Ehsan Adeli
Stanford University
Computer Vision; Computational Neuroscience; Precision Healthcare; Ambient Intelligence