🤖 AI Summary
This work addresses the challenge of reliably detecting hallucinations in large vision-language models (LVLMs), a problem that existing methods struggle with because they rely solely on forward-pass attention and overlook the token-influence propagation revealed by gradient signals. To this end, we propose LVLMs-Saliency, a novel framework that introduces gradient-aware saliency for hallucination detection, quantifying the visual grounding strength of output tokens by fusing attention weights with input gradients. This analysis reveals a strong link between low saliency and hallucinatory behavior. Building on this insight, we design two training-free, plug-and-play inference-time refinement mechanisms: Saliency-Guided Rejection Sampling (SGRS), which dynamically filters sampled tokens, and Local Coherence Reinforcement (LocoRE), which strengthens local contextual consistency. Experiments demonstrate that our method significantly reduces hallucination rates across multiple LVLMs while preserving linguistic fluency and task performance, offering an interpretable and robust solution for reliability enhancement.
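To make the fused attention-gradient score concrete, here is a minimal NumPy sketch. The function name `token_saliency`, the head-averaged elementwise product, and the array shapes are illustrative assumptions, not the paper's exact formulation; in practice the attention weights and their gradients would come from a backward pass through the LVLM.

```python
import numpy as np

def token_saliency(attn, attn_grad):
    """Fuse attention weights with their gradients into a per-token
    saliency score (hypothetical sketch).

    attn      : (num_heads, seq_len) attention from the current
                prediction step to each prior token.
    attn_grad : (num_heads, seq_len) gradient of the loss w.r.t.
                those attention weights.
    Returns a (seq_len,) vector of nonnegative saliency scores.
    """
    # Elementwise attention-times-gradient, magnitude only,
    # averaged over attention heads.
    fused = np.abs(attn * attn_grad)
    return fused.mean(axis=0)

# Toy example: two heads attending over three prior tokens.
attn = np.array([[0.1, 0.7, 0.2],
                 [0.3, 0.3, 0.4]])
grad = np.array([[1.0, -2.0, 0.5],
                 [0.0,  1.0, -1.0]])
scores = token_saliency(attn, grad)  # one score per prior token
```

A token with both high attention and a large gradient magnitude scores high; a token the model attends to but whose weight barely affects the loss scores low, which is the "weakly grounded" case the framework targets.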
📝 Abstract
Recent studies have examined attention dynamics in large vision-language models (LVLMs) to detect hallucinations. However, existing approaches remain limited in reliably distinguishing hallucinated from factually grounded outputs, as they rely solely on forward-pass attention patterns and neglect gradient-based signals that reveal how token influence propagates through the network. To bridge this gap, we introduce LVLMs-Saliency, a gradient-aware diagnostic framework that quantifies the visual grounding strength of each output token by fusing attention weights with their input gradients. Our analysis uncovers a decisive pattern: hallucinations frequently arise when preceding output tokens exhibit low saliency toward the prediction of the next token, signaling a breakdown in contextual memory retention. Leveraging this insight, we propose a dual-mechanism inference-time framework to mitigate hallucinations: (1) Saliency-Guided Rejection Sampling (SGRS), which dynamically filters candidate tokens during autoregressive decoding by rejecting those whose saliency falls below a context-adaptive threshold, thereby preventing coherence-breaking tokens from entering the output sequence; and (2) Local Coherence Reinforcement (LocoRE), a lightweight, plug-and-play module that strengthens attention from the current token to its most recent predecessors, actively counteracting the contextual forgetting behavior identified by LVLMs-Saliency. Extensive experiments across multiple LVLMs demonstrate that our method significantly reduces hallucination rates while preserving fluency and task performance, offering a robust and interpretable solution for enhancing model reliability. Code is available at: https://github.com/zhangbaijin/LVLMs-Saliency
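The SGRS mechanism can be sketched as rejection sampling over decoder proposals. Everything below is a hypothetical illustration: the threshold rule (history mean minus `k` standard deviations), the `saliency_fn` hook, and the fallback to the highest-saliency candidate are assumptions standing in for the paper's context-adaptive criterion.

```python
import numpy as np

def sgrs_sample(candidates, saliency_fn, history_saliency,
                k=1.0, max_tries=8, rng=None):
    """Saliency-Guided Rejection Sampling (hypothetical sketch).

    candidates       : list of (token, prob) proposals at this step.
    saliency_fn      : callable token -> saliency score (assumed hook
                       into a saliency computation like LVLMs-Saliency).
    history_saliency : saliency of tokens already emitted; used to form
                       a context-adaptive threshold.
    """
    rng = rng or np.random.default_rng()
    # Context-adaptive threshold: assumed mean-minus-k-std rule.
    thresh = np.mean(history_saliency) - k * np.std(history_saliency)
    probs = np.array([p for _, p in candidates], dtype=float)
    probs /= probs.sum()
    for _ in range(max_tries):
        tok = candidates[rng.choice(len(candidates), p=probs)][0]
        if saliency_fn(tok) >= thresh:
            return tok  # accept: token is sufficiently grounded
    # All draws rejected: fall back to the best-grounded candidate.
    return max(candidates, key=lambda c: saliency_fn(c[0]))[0]
```

Low-saliency proposals are resampled rather than emitted, which is how coherence-breaking tokens are kept out of the output sequence.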
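LocoRE's reinforcement of attention toward recent predecessors might look like the following pre-softmax boost. The additive bonus, the `window` and `alpha` hyperparameters, and the function name are all illustrative assumptions; the paper's module may reweight attention differently.

```python
import numpy as np

def locore_boost(attn_logits, window=4, alpha=0.5):
    """Local Coherence Reinforcement (hypothetical sketch): additively
    boost the attention logits from the current token to its `window`
    most recent predecessors before the softmax, counteracting the
    contextual-forgetting pattern identified by LVLMs-Saliency."""
    boosted = attn_logits.astype(float).copy()
    boosted[-window:] += alpha  # favor the most recent context
    # Numerically stable softmax back to a probability distribution.
    e = np.exp(boosted - boosted.max())
    return e / e.sum()
```

Because the change is a fixed bonus applied at inference time, the module is training-free and can be attached to any decoder layer without touching model weights.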