Image Tokens Matter: Mitigating Hallucination in Discrete Tokenizer-based Large Vision-Language Models via Latent Editing

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses object hallucination in discrete-image-tokenizer-based large vision-language models (LVLMs), attributing the issue to spurious visual priors encoded in image tokens. The authors identify high-frequency co-occurrence among image tokens as a key mechanism inducing hallucination. They first construct an image-token co-occurrence graph and employ graph neural network (GNN)-based contrastive learning combined with spectral clustering to detect hallucination-prone token clusters. They then propose a latent-space editing strategy that dynamically suppresses the implicit influence of semantically strong but visually absent tokens during autoregressive generation. The method significantly reduces hallucination rates across multiple benchmarks, including POPE, HallusionBench, and MME, while preserving core visual understanding and language-generation capabilities. It introduces no architectural modifications to the base LVLM and incurs negligible computational overhead. Code is publicly available.

📝 Abstract
Large Vision-Language Models (LVLMs) with discrete image tokenizers unify multimodal representations by encoding visual inputs into a finite set of tokens. Despite their effectiveness, we find that these models still hallucinate non-existent objects. We hypothesize that this may be due to visual priors induced during training: When certain image tokens frequently co-occur in the same spatial regions and represent shared objects, they become strongly associated with the verbalizations of those objects. As a result, the model may hallucinate by evoking visually absent tokens that often co-occur with present ones. To test this assumption, we construct a co-occurrence graph of image tokens using a segmentation dataset and employ a Graph Neural Network (GNN) with contrastive learning followed by a clustering method to group tokens that frequently co-occur in similar visual contexts. We find that hallucinations predominantly correspond to clusters whose tokens dominate the input, and more specifically, that the visually absent tokens in those clusters show much higher correlation with hallucinated objects compared to tokens present in the image. Based on this observation, we propose a hallucination mitigation method that suppresses the influence of visually absent tokens by modifying latent image embeddings during generation. Experiments show our method reduces hallucinations while preserving expressivity. Code is available at https://github.com/weixingW/CGC-VTD/tree/main
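The graph-construction step described in the abstract, counting how often pairs of image tokens co-occur within the same spatial region of a segmentation dataset, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `regions` is a hypothetical list of token-ID sets, one per segmented object region, and the result is a weighted edge list suitable for feeding a GNN or a spectral-clustering routine.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_graph(regions):
    """Build a weighted co-occurrence graph over image-token IDs.

    regions: iterable of sets of token IDs, one set per segmented region.
    Returns a Counter mapping (min_id, max_id) edges to co-occurrence counts.
    """
    edges = Counter()
    for tokens in regions:
        # Each unordered token pair inside one region contributes one edge weight.
        for a, b in combinations(sorted(set(tokens)), 2):
            edges[(a, b)] += 1
    return edges

# Toy example: three regions sharing some token IDs.
regions = [{3, 7, 9}, {3, 7}, {7, 9, 12}]
graph = cooccurrence_graph(regions)
# Tokens 3 and 7 co-occur in two regions, so edge (3, 7) has weight 2.
```

In the paper this graph is then embedded with a contrastively trained GNN and partitioned by spectral clustering; the sketch above only covers the counting stage.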
Problem

Research questions and friction points this paper is trying to address.

Mitigates object hallucination in vision-language models
Identifies co-occurring image tokens causing hallucinations
Edits latent embeddings to suppress absent token influence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses GNN to analyze token co-occurrence
Modifies latent embeddings to reduce hallucinations
Clusters tokens that frequently co-occur in similar visual contexts