One Token, Two Fates: A Unified Framework via Vision Token Manipulation Against MLLMs Hallucination

📅 2026-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the susceptibility of multimodal large language models (MLLMs) to linguistic priors, which often leads to image-irrelevant hallucinations—a challenge inadequately mitigated by existing training-free approaches, which struggle to simultaneously enhance visual grounding and suppress textual bias. To this end, the authors propose a unified, training-free framework that leverages vision tokens for both positive enhancement and counterfactual causal calibration. Specifically, the method strengthens visual representations through image-aware augmentation, while pruning vision tokens to construct latent-space negative samples that correct internal biases. Integrating Synergistic Visual Calibration (SVC) and Causal Representation Calibration (CRC) modules, the approach achieves an average absolute improvement of 2% in POPE accuracy on LLaVA-1.5 with only a 1.06× inference-latency overhead, substantially alleviating object hallucination.

📝 Abstract
Current training-free methods tackle MLLM hallucination with separate strategies: either enhancing visual signals or suppressing text inertia. However, these separate methods are insufficient due to critical trade-offs: simply enhancing vision often fails against strong language priors, while suppressing language can introduce extra image-irrelevant noise. Moreover, we find their naive combination is also ineffective, necessitating a unified framework. We propose such a framework by focusing on the core asset: the vision token. Our design leverages two key insights: (1) augmented images offer complementary visual semantics, and (2) removing vision tokens (information-gap) isolates hallucination tendencies more precisely than distorting images (modality-gap). Based on these, our framework uses vision tokens in two distinct ways, both operating on latent representations: our Synergistic Visual Calibration (SVC) module incorporates augmented tokens to strengthen visual representations, while our Causal Representation Calibration (CRC) module uses pruned tokens to create latent-space negative samples for correcting internal model biases. By harmonizing these two roles, our framework effectively restores the vision-language balance, significantly reducing object hallucinations and improving POPE accuracy by an average of 2% absolute on LLaVA-1.5 across multiple benchmarks with only a 1.06x inference latency overhead.
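The paper's exact formulation is not given in this card, but the two roles it describes can be sketched in a contrastive-decoding style: blend latents from an augmented view (SVC), and push the output distribution away from logits produced with vision tokens pruned (CRC). All function names and the `alpha`/`beta` coefficients below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def svc_blend(hidden, hidden_aug, beta=0.2):
    """SVC (sketch): mix the latent representation of the original image
    with that of an augmented view; `beta` (assumed) sets the mix weight."""
    return (1.0 - beta) * hidden + beta * hidden_aug

def crc_calibrate(logits_full, logits_pruned, alpha=1.0):
    """CRC (sketch): treat logits computed with pruned vision tokens as a
    latent-space negative sample and subtract them contrastively, so
    tokens favored purely by the language prior are penalized."""
    return (1.0 + alpha) * logits_full - alpha * logits_pruned

# Toy example: with vision tokens pruned, the language prior strongly
# favors token 2 (a hallucinated object); calibration flips the choice.
logits_full = np.array([1.0, 3.0, 4.0])    # full vision tokens
logits_pruned = np.array([0.0, 1.0, 6.0])  # vision tokens pruned
calibrated = crc_calibrate(logits_full, logits_pruned, alpha=1.0)
print(calibrated)  # → [2. 5. 2.]; argmax moves from token 2 to token 1
```

Subtracting a vision-ablated ("information-gap") negative rather than a distorted-image one is the abstract's stated reason the negative sample isolates the language prior more precisely.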
Problem

Research questions and friction points this paper is trying to address.

MLLM hallucination
vision token
visual-text balance
object hallucination
training-free methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision token manipulation
hallucination mitigation
multimodal large language models
visual calibration
causal representation calibration