Devils in Middle Layers of Large Vision-Language Models: Interpreting, Detecting and Mitigating Object Hallucinations via Attention Lens

📅 2024-11-23
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
This work addresses the object hallucination problem in Large Vision-Language Models (LVLMs), tracing its root cause to anomalous attention patterns in the intermediate layers, specifically during the "visual information enrichment" and "semantic refinement" stages. The authors propose a training-free, inference-time intervention that disentangles multi-head attention maps and recalibrates visual attention across heads, and that generalizes across models. Key contributions include: (1) identifying the attention weight distribution in the visual information enrichment stage as a strong predictor of hallucination; and (2) a lightweight attention-lens framework for efficient hallucination detection and correction. The method significantly reduces object hallucination rates on mainstream LVLMs without architectural modification or fine-tuning. The implementation is publicly available.

📝 Abstract
Hallucinations in Large Vision-Language Models (LVLMs) significantly undermine their reliability, motivating researchers to explore the causes of hallucination. However, most studies primarily focus on the language aspect rather than the visual. In this paper, we address how LVLMs process visual information and whether this process causes hallucination. Firstly, we use the attention lens to identify the stages at which LVLMs handle visual data, discovering that the middle layers are crucial. Moreover, we find that these layers can be further divided into two stages: "visual information enrichment" and "semantic refinement", which respectively propagate visual data to object tokens and interpret it through text. By analyzing attention patterns during the visual information enrichment stage, we find that real tokens consistently receive higher attention weights than hallucinated ones, serving as a strong indicator of hallucination. Further examination of multi-head attention maps reveals that hallucination tokens often result from heads interacting with inconsistent objects. Based on these insights, we propose a simple inference-time method that adjusts visual attention by integrating information across various heads. Extensive experiments demonstrate that this approach effectively mitigates hallucinations in mainstream LVLMs without additional training costs. Code is available at https://github.com/ZhangqiJiang07/middle_layers_indicating_hallucinations.
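The abstract's detection signal — real object tokens receive more attention mass on visual tokens in the middle layers than hallucinated ones — can be sketched as a simple score over attention maps. This is a minimal illustration on synthetic data, assuming access to per-layer, per-head attention weights (as exposed by typical decoder-only LVLMs); the layer range, shapes, and function names here are illustrative, not the paper's exact procedure.

```python
import numpy as np

def visual_attention_score(attn, middle_layers, visual_idx):
    """Mean attention mass an object token's query assigns to visual tokens,
    averaged over heads and a chosen set of middle layers.

    attn: array of shape (num_layers, num_heads, num_keys) holding the
          attention weights from a single object-token query position.
    """
    sel = attn[middle_layers][:, :, visual_idx]   # (L_mid, H, V)
    return float(sel.sum(axis=-1).mean())         # total mass on visual keys

# Synthetic example: 32 layers, 8 heads, 64 key tokens (first 16 visual).
rng = np.random.default_rng(0)

def fake_attn(visual_boost):
    logits = rng.normal(size=(32, 8, 64))
    logits[:, :, :16] += visual_boost             # bias toward visual tokens
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)      # softmax over key positions

middle = list(range(8, 20))                       # hypothetical middle-layer band
vis = np.arange(16)
real_score = visual_attention_score(fake_attn(2.0), middle, vis)
halluc_score = visual_attention_score(fake_attn(0.0), middle, vis)
```

Under this toy setup, a token whose attention is biased toward the image region scores higher, mirroring the paper's observation that the score separates real from hallucinated object tokens.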
Problem

Research questions and friction points this paper is trying to address.

Identify the stages at which LVLMs process visual data and whether they cause hallucinations
Analyze attention patterns to detect hallucination indicators in LVLMs
Mitigate hallucinations by adjusting visual attention during inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes middle layers via attention lens
Detects hallucinations using attention weights
Adjusts visual attention across heads
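The mitigation idea — integrating visual attention across heads at inference time — can be sketched as blending each head's attention over visual tokens toward the cross-head consensus. This is a hedged sketch of one plausible reading of the method, not the authors' implementation; the blending weight `alpha` and function name are assumptions.

```python
import numpy as np

def integrate_visual_attention(attn, visual_idx, alpha=0.5):
    """Blend each head's attention over visual tokens with the cross-head
    average, then renormalize each head's row to sum to 1.

    attn: (num_heads, num_keys) attention weights for one query token.
    alpha: interpolation weight toward the cross-head mean pattern.
    """
    out = attn.copy()
    head_vis = out[:, visual_idx]                        # (H, V)
    mean_vis = head_vis.mean(axis=0, keepdims=True)      # consensus pattern
    out[:, visual_idx] = (1 - alpha) * head_vis + alpha * mean_vis
    return out / out.sum(axis=-1, keepdims=True)         # renormalize rows

# Usage on random attention rows: 8 heads, 64 keys (first 16 visual).
rng = np.random.default_rng(1)
raw = rng.random((8, 64))
raw /= raw.sum(axis=-1, keepdims=True)
vis = np.arange(16)
adj = integrate_visual_attention(raw, vis, alpha=0.5)
```

Pulling heads toward the shared visual pattern reduces the head-to-head inconsistency that, per the paper's analysis, accompanies hallucinated tokens, while renormalization keeps each row a valid attention distribution.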
Zhangqi Jiang
National University of Defense Technology
Junkai Chen
Southeast University, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Beier Zhu
Research Scientist, Nanyang Technological University
Robust Machine Learning
Tingjin Luo
National University of Defense Technology
Machine Learning · Computer Vision · Data Mining
Yankun Shen
Southeast University, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Xu Yang
Southeast University, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China