🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate hallucinated content—i.e., spurious objects or attributes not present in the input image—thereby compromising caption reliability. We observe that hallucinations often manifest as tokens whose generation exhibits low dependence on visual input, and thus propose an image-dependence signal as a principled hallucination indicator. To operationalize this insight, we introduce the first token-level hallucination classifier trained on parallel image-absent and image-augmented inference traces, enabling fine-grained hallucination detection without access to ground-truth annotations. Furthermore, we design a plug-and-play controllable decoding strategy that dynamically suppresses hallucinatory tokens during inference. Our approach requires no model fine-tuning or architectural modification, ensuring strong generalization across LVLMs and seamless deployment. Extensive evaluation across multiple benchmarks demonstrates significant hallucination reduction while preserving caption quality, fluency, and lexical diversity.
📝 Abstract
Large Vision-Language Models (LVLMs) integrate image encoders with Large Language Models (LLMs) to process multi-modal inputs and perform complex visual tasks. However, they often generate hallucinations by describing non-existent objects or attributes, compromising their reliability. This study analyzes hallucination patterns in image captioning, showing that not all tokens in the generation process are influenced by the image input, and that image dependency can serve as a useful signal for hallucination detection. Building on this, we develop an automated pipeline to identify hallucinated objects and train a token-level classifier on hidden representations from parallel inference passes, run with and without image input. Leveraging this classifier, we introduce a decoding strategy that effectively controls hallucination rates in image captioning at inference time.
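The core idea of the abstract, that a token's dependence on the image can flag hallucination, can be illustrated with a minimal sketch. The snippet below is purely illustrative and does not use the paper's trained classifier: it fabricates per-token hidden states for two parallel passes (with and without the image), scores each token by how much its representation shifts when the image is removed, flags low-shift tokens with a simple threshold in place of the learned token-level classifier, and then down-weights flagged tokens' logits to mimic the decoding-time suppression. All array names, sizes, and the `penalty` value are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-token hidden states from two parallel inference passes.
# h_img: pass with the image; h_txt: image-absent pass. A token whose
# representation barely changes without the image is weakly image-dependent,
# which the paper uses as a hallucination signal.
d, n_tokens = 16, 8
h_img = rng.normal(size=(n_tokens, d))
# Simulate: tokens 0-3 depend strongly on the image (large shift when it is
# removed), tokens 4-7 barely depend on it (tiny shift).
shift = np.concatenate([rng.normal(scale=2.0, size=(4, d)),
                        rng.normal(scale=0.05, size=(4, d))])
h_txt = h_img + shift

# Image-dependence score: distance between the two representations.
dep = np.linalg.norm(h_img - h_txt, axis=1)

# Threshold stand-in for the paper's trained token-level classifier:
# low dependence -> flagged as likely hallucination.
tau = dep.mean()
hallucination_flag = dep < tau

# Decoding-time control sketch: penalize flagged tokens' logits,
# then renormalize into a probability distribution.
logits = rng.normal(size=n_tokens)
penalty = 5.0  # assumed suppression strength
adjusted = logits - penalty * hallucination_flag
probs = np.exp(adjusted) / np.exp(adjusted).sum()
```

In this toy setup the strongly image-dependent tokens keep their original logits while the image-independent ones are suppressed, mirroring the plug-and-play decoding strategy described above, without any fine-tuning of the underlying model.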