🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate visually ungrounded hallucinations: outputs that are textually plausible yet inconsistent with the image. Method: We propose HalluRNN, an architecture-level solution centered on the Dual-Gated Depth Propagation Unit (DG-DPU), a lightweight, task-agnostic module for recurrent inter-layer reasoning. The DG-DPU enables gated cross-layer sharing and recursive refinement of hidden states; only this module needs fine-tuning, improving representational consistency and output reliability. Contribution/Results: Experiments show that HalluRNN substantially reduces hallucination rates across multiple hallucination benchmarks while improving robustness and generalization, at a computational and parameter cost far below full-model fine-tuning or data-augmentation approaches. HalluRNN thus offers an efficient, scalable paradigm for hallucination mitigation in LVLMs.
📝 Abstract
Though Large Vision-Language Models (LVLMs) have achieved remarkable performance across various tasks, they remain prone to hallucinations: generating outputs that are textually plausible but visually ungrounded. Prior approaches generally address this issue through data-centric fine-tuning or novel decoding strategies, which often require substantial resources or task-specific configurations. In this work, we introduce an architecture-level solution, HalluRNN, which enhances model stability through recurrent cross-layer reasoning. Specifically, we propose a novel Dual-Gated Depth Propagation Unit (DG-DPU) module, which is shared across layers and recurrently refines hidden states. This allows information to propagate adaptively through the model, enforces consistency across layers, and mitigates hallucinations caused by representational drift. By fine-tuning only the DG-DPU module, HalluRNN achieves strong and robust performance across multiple benchmarks.
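The core idea — one shared, dual-gated unit applied recurrently over the depth axis to fuse each layer's hidden state with a propagated cross-layer state — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gating equations, weight shapes, and the `DualGatedUnit` name are assumptions made for exposition.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DualGatedUnit:
    """Hypothetical dual-gated unit, shared across all layers (not the paper's exact equations)."""
    def __init__(self, d, seed=0):
        rng = np.random.default_rng(seed)
        # Two gates: one modulates the propagated cross-layer state,
        # one modulates the current layer's own hidden state.
        self.Wp = rng.standard_normal((d, d)) * 0.1
        self.Wl = rng.standard_normal((d, d)) * 0.1

    def __call__(self, h_layer, h_prop):
        g_p = sigmoid(h_prop @ self.Wp)      # gate on the propagated state
        g_l = sigmoid(h_layer @ self.Wl)     # gate on this layer's hidden state
        return g_p * h_prop + g_l * h_layer  # gated fusion -> new propagated state

# One unit, reused recurrently over the stack of layer outputs (depth-wise recurrence).
d = 4
unit = DualGatedUnit(d)
layer_states = [np.ones(d) * i for i in range(1, 4)]  # stand-in for per-layer hidden states
h = np.zeros(d)
for h_layer in layer_states:
    h = unit(h_layer, h)
print(h.shape)  # (4,)
```

Because the same unit is shared across layers, fine-tuning touches only its two weight matrices rather than the backbone — which is what keeps the parameter overhead small relative to full-model fine-tuning.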