AI Summary
This work addresses the prevalent issue of hallucination in large vision-language models (LVLMs), which often generate content inconsistent with the input image during open-ended generation. The study uncovers a novel phenomenon termed the "commitment-depth gap": tokens corresponding to factual content converge earlier in the decoding process than hallucinated ones. Leveraging this insight, the authors propose Context Embedding Injection (CEI), a training-free method that dynamically injects the context embedding from the end of the input sequence as a visual anchor during decoding to suppress hallucinations. Extensive experiments on the CHAIR, AMBER, and MMHal-Bench benchmarks demonstrate that CEI significantly improves generation faithfulness, consistently outperforming existing approaches across three prominent LVLMs, with its dynamic variant achieving the lowest overall hallucination rate.
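The commitment-depth gap can be made concrete with a Logit Lens style probe: project each decoder layer's hidden state for a generated position through the output head, and record the earliest layer from which the token the model ultimately emitted becomes and remains the top-1 prediction. The sketch below is a minimal numpy illustration under assumed shapes; the function name, the tied unembedding matrix, and the "stays top-1 through the final layer" criterion are illustrative assumptions, not the paper's exact metric.

```python
import numpy as np

def commitment_depth(layer_hidden, unembed, final_token):
    """Earliest layer from which `final_token` is top-1 at every later layer.

    layer_hidden: (L, d) hidden state of one generated position at each layer
    unembed:      (d, V) output projection, applied per layer ("Logit Lens")
    final_token:  vocab id the model ultimately emitted at this position
    Returns L (the layer count) if the token never stabilizes.
    """
    logits = layer_hidden @ unembed      # (L, V): per-layer next-token logits
    top1 = logits.argmax(axis=-1)        # per-layer greedy prediction
    num_layers = top1.shape[0]
    depth = num_layers
    # Scan backwards while the layer-wise top-1 still matches the final token.
    for layer in range(num_layers - 1, -1, -1):
        if top1[layer] != final_token:
            break
        depth = layer
    return depth
```

Under this probe, the paper's finding corresponds to truthful tokens having a smaller `commitment_depth` than hallucinated ones, averaged over generated positions.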
Abstract
Hallucinations, responses inconsistent with the visual input, remain a critical limitation of large vision-language models (LVLMs), especially in open-ended tasks such as image captioning and visual reasoning. In this work, we probe the layer-wise generation dynamics that drive hallucinations and propose a training-free mitigation strategy. Employing the Logit Lens, we examine how LVLMs construct next-token distributions across decoder layers, uncovering a pronounced commitment-depth gap: truthful tokens accumulate probability mass on their final candidates earlier than hallucinatory ones. Drawing on this discovery, we introduce Context Embedding Injection (CEI), a lightweight method that harnesses the hidden state of the last input token (the context embedding) as a grounding signal to maintain visual fidelity throughout decoding and curb hallucinations. Evaluated on the CHAIR, AMBER, and MMHal-Bench benchmarks (with a maximum generation length of 512 tokens), CEI outperforms state-of-the-art baselines across three LVLMs, with its dynamic variant yielding the lowest overall hallucination rates. By pairing novel mechanistic insight with a scalable intervention, this work advances the mitigation of hallucinations in LVLMs.
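The injection step itself can be sketched in a few lines. The abstract only states that the last input token's hidden state serves as a grounding signal during decoding, so the blend rule below (a convex combination with strength `alpha`, followed by norm rescaling) and the function name are illustrative assumptions rather than the paper's exact update; a dynamic variant would adjust `alpha` per decoding step.

```python
import numpy as np

def inject_context(hidden, context_emb, alpha=0.2):
    """Blend the context embedding into a decoding-step hidden state.

    hidden:      (d,) hidden state at the current decoding step
    context_emb: (d,) hidden state of the last input token (the visual anchor)
    alpha:       injection strength; alpha=0 leaves decoding unchanged
    """
    mixed = (1.0 - alpha) * hidden + alpha * context_emb
    # Rescale so the injected state keeps the original hidden-state norm,
    # avoiding a distribution shift in downstream layers.
    return mixed * (np.linalg.norm(hidden) / (np.linalg.norm(mixed) + 1e-8))
```

In a real LVLM this would run inside the decoder (e.g. via a forward hook on selected layers) at every generated token, which is what makes the method training-free.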