First Logit Boosting: Visual Grounding Method to Mitigate Object Hallucination in Large Vision-Language Models

📅 2026-04-01
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the prevalent issue of object hallucination in large vision-language models during text generation, and the inability of existing training-free methods to maintain long-term visual alignment. To this end, the authors propose a lightweight, training-free visual alignment approach that caches the logits of the first generated token and continuously injects this initial visual signal into subsequent token predictions through weighted fusion. Inspired by contrastive decoding, the method effectively mitigates visual information decay without increasing model complexity or data requirements. Extensive experiments demonstrate that it significantly reduces object hallucination rates across multiple tasks, benchmarks, and backbone architectures, while incurring negligible inference overhead, making it well-suited for real-time multimodal systems.
📝 Abstract
Recent Large Vision-Language Models (LVLMs) have demonstrated remarkable performance across various multimodal tasks that require understanding both visual and linguistic inputs. However, object hallucination -- the generation of nonexistent objects in answers -- remains a persistent challenge. Although several approaches such as retraining and external grounding methods have been proposed to mitigate this issue, they still suffer from high data costs or structural complexity. Training-free methods such as Contrastive Decoding (CD) are more cost-effective, avoiding additional training or external models, but still suffer from long-term decay, where visual grounding weakens and language priors dominate as the generation progresses. In this paper, we propose First Logit Boosting (FLB), a simple yet effective training-free technique designed to alleviate long-term decay in LVLMs. FLB stores the logit of the first generated token and adds it to subsequent token predictions, effectively mitigating long-term decay of visual information. We observe that FLB (1) sustains the visual information embedded in the first token throughout generation, and (2) suppresses hallucinated words through the stabilizing effect of the "The" token. Experimental results show that FLB significantly reduces object hallucination across various tasks, benchmarks, and backbone models. Notably, it causes negligible inference overhead, making it highly applicable to real-time multimodal systems. Code is available at https://github.com/jiwooha20/FLB
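The mechanism described in the abstract can be sketched in a few lines. This is a minimal illustration based only on the summary above -- cache the first step's logits and add them, scaled by a weight, to every later step's logits before picking the next token. The function name `flb_decode` and the weight `alpha` are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
def flb_decode(step_logits, alpha=0.5):
    """Greedy decoding over a sequence of per-step logit vectors with a
    First-Logit-Boosting-style fusion (sketch; `alpha` is an assumed
    weighting hyperparameter, not taken from the paper)."""
    first = None       # cached logits of the first generated token
    tokens = []
    for logits in step_logits:
        if first is None:
            first = list(logits)   # store the first step's logits
            fused = list(logits)   # first token is decoded unchanged
        else:
            # weighted fusion: re-inject the cached first-step signal
            fused = [l + alpha * f for l, f in zip(logits, first)]
        # greedy pick: index of the largest fused logit
        tokens.append(max(range(len(fused)), key=fused.__getitem__))
    return tokens
```

With a large enough `alpha`, the cached first-step logits can override a later step's language-prior preference, which is the intuition behind sustaining visual grounding through generation; the only extra cost is one vector add per step, matching the paper's claim of negligible inference overhead.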
Problem

Research questions and friction points this paper is trying to address.

object hallucination
visual grounding
large vision-language models
long-term decay
multimodal generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

First Logit Boosting
object hallucination
visual grounding
training-free
large vision-language models