Mitigating Hallucinations via Inter-Layer Consistency Aggregation in Large Vision-Language Models

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently produce outputs with visual hallucinations, i.e., textual content inconsistent with the input image. Existing training-free mitigation methods exhibit unstable performance and high sensitivity to hyperparameters. Method: We propose DCLA, a training-free, fine-tuning-free, and external-knowledge-free decoding mechanism. DCLA dynamically aggregates hidden states from preceding layers to construct a semantic reference, identifies layers exhibiting semantic deviation, and rectifies them to enforce inter-layer consistency. Contribution/Results: DCLA introduces the first inter-layer-consistency-driven decoding paradigm that requires no training or architectural modification, and it demonstrates strong cross-model generalizability and hyperparameter robustness. Evaluated on MME, POPE, and other benchmarks, DCLA significantly reduces hallucination rates and improves output reliability and multi-task performance across diverse LVLMs, including LLaVA, Qwen-VL, and InternVL, validating its broad effectiveness.
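The aggregate-then-correct loop described above can be sketched in plain Python. Note this is an illustrative reconstruction, not the paper's exact formulation: the function name `dcla_correct`, the use of cosine similarity as the deviation criterion, and the `threshold` and `alpha` parameters are all assumptions for the sake of the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def dcla_correct(hidden_states, threshold=0.9, alpha=0.5):
    """Illustrative sketch of inter-layer consistency correction.

    hidden_states: list of per-layer hidden-state vectors.
    For each layer, build a dynamic semantic reference by averaging
    the (already corrected) preceding layers; if the current layer's
    state deviates from that reference (low cosine similarity),
    blend it toward the reference.
    """
    corrected = [hidden_states[0]]
    for h in hidden_states[1:]:
        # Dynamic semantic reference: mean of preceding layers.
        ref = [sum(col) / len(corrected) for col in zip(*corrected)]
        if cosine(h, ref) < threshold:
            # Rectify the deviated layer by pulling it toward the reference.
            h = [alpha * r + (1 - alpha) * x for r, x in zip(ref, h)]
        corrected.append(h)
    return corrected
```

For instance, with states `[[1.0, 0.0], [1.0, 0.1], [-1.0, 0.0]]`, the second layer is consistent with the first and passes through unchanged, while the third points in the opposite direction of the running reference and gets blended back toward it.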

📝 Abstract
Despite the impressive capabilities of Large Vision-Language Models (LVLMs), they remain susceptible to hallucinations: generating content that is inconsistent with the input image. Existing training-free hallucination mitigation methods often suffer from unstable performance and high sensitivity to hyperparameter settings, limiting their practicality and broader adoption. In this paper, we propose a novel decoding mechanism, Decoding with Inter-layer Consistency via Layer Aggregation (DCLA), which requires no retraining, fine-tuning, or access to external knowledge bases. Specifically, our approach constructs a dynamic semantic reference by aggregating representations from previous layers, and corrects semantically deviated layers to enforce inter-layer consistency. This design allows DCLA to robustly mitigate hallucinations across multiple LVLMs. Experiments on hallucination benchmarks such as MME and POPE demonstrate that DCLA effectively reduces hallucinations while enhancing the reliability and performance of LVLMs.
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in Large Vision-Language Models (LVLMs)
Improving unstable performance of training-free hallucination methods
Enhancing reliability of LVLMs without retraining or external knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic semantic reference via layer aggregation
Corrects deviated layers for inter-layer consistency
No retraining or external knowledge required