HalluRNN: Mitigating Hallucinations via Recurrent Cross-Layer Reasoning in Large Vision-Language Models

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate visually ungrounded hallucinations: outputs that are textually plausible but inconsistent with the input image. Method: We propose HalluRNN, an architecture-level solution centered on the Dual-Gated Depth Propagation Unit (DG-DPU), a novel, lightweight, task-agnostic recurrent reasoning module shared across layers. The DG-DPU enables gated cross-layer hidden-state sharing and recursive refinement; only this module is fine-tuned, enhancing representational consistency and output reliability. Contribution/Results: Experiments show that HalluRNN substantially reduces hallucination rates across multiple hallucination benchmarks while improving robustness and generalization, at far lower computational and parameter cost than full-model fine-tuning or data-augmentation approaches. HalluRNN thus offers an efficient, scalable paradigm for hallucination mitigation in LVLMs.

📝 Abstract
Though Large Vision-Language Models (LVLMs) have achieved remarkable performance across various tasks, they are still prone to hallucinations: generating outputs that are textually plausible but visually ungrounded. While prior approaches generally address this issue through data-centric fine-tuning or innovative decoding strategies, these methods often require substantial resources or task-specific configurations. In this work, we introduce an architecture-level solution, HalluRNN, which enhances model stability through recurrent cross-layer reasoning. Specifically, we propose a novel Dual-Gated Depth Propagation Unit (DG-DPU) module, which is shared across layers and recurrently refines hidden states. This allows for the adaptive propagation of information throughout the model, enforces consistency across layers, and mitigates hallucinations caused by representational drift. By fine-tuning only the DG-DPU module, HalluRNN achieves strong and robust performance across multiple benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in Large Vision-Language Models
Reducing resource-heavy fine-tuning for LVLM stability
Addressing representational drift via cross-layer reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recurrent cross-layer reasoning for stability
Dual-Gated Depth Propagation Unit module
Adaptive information propagation across layers
Le Yu
College of Computer Science, Sichuan University, China
Kaishen Wang
University of Maryland
Machine Learning · Deep Learning
Jianlong Xiong
College of Computer Science, Sichuan University, China
Yue Cao
College of Computer Science, Sichuan University, China
Tao He
College of Computer Science, Sichuan University, China