Towards Interpretable Hallucination Analysis and Mitigation in LVLMs via Contrastive Neuron Steering

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the pervasive hallucination problem in large vision-language models (LVLMs), which existing approaches often mitigate only at the output layer without probing the underlying representational mechanisms. For the first time, this study investigates hallucination origins at the neuronal level by applying sparse autoencoders to decompose visual embeddings, thereby identifying interpretable neurons correlated with actual image content. Building on this insight, the authors propose a contrastive neuron modulation strategy that, during the prefilling stage, enhances activation of content-relevant neurons while suppressing irrelevant ones to reduce hallucinatory outputs. Evaluated across multiple hallucination-specific and general multimodal benchmarks, the method significantly lowers hallucination rates while preserving or even improving the model’s visual-semantic comprehension capabilities, offering an interpretable, controllable, and effective solution to hallucination mitigation.
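
The summary describes decomposing dense visual embeddings into sparse, interpretable neurons with a sparse autoencoder. The sketch below is a minimal illustration of that decomposition step, not the authors' implementation: the layer sizes, ReLU encoder, and L1 sparsity penalty are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE that maps a dense visual embedding to a wide, sparse neuron code."""
    def __init__(self, d_model: int = 1024, d_sae: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_sae)
        self.decoder = nn.Linear(d_sae, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU keeps only positively activated neurons, yielding a sparse code.
        return torch.relu(self.encoder(x))

    def forward(self, x: torch.Tensor):
        z = self.encode(x)        # sparse "neuron" activations
        x_hat = self.decoder(z)   # reconstruction of the dense embedding
        return x_hat, z

# Placeholder patch embeddings standing in for an LVLM vision encoder's output.
sae = SparseAutoencoder()
visual_emb = torch.randn(256, 1024)
recon, acts = sae(visual_emb)
loss = nn.functional.mse_loss(recon, visual_emb) + 1e-3 * acts.abs().mean()
```

Once trained, individual dimensions of `acts` play the role of the interpretable neurons analyzed in the paper (always-on vs. image-specific).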

📝 Abstract
Large vision-language models (LVLMs) achieve remarkable multimodal understanding and generation but remain susceptible to hallucinations. Existing mitigation methods predominantly focus on output-level adjustments, leaving the internal mechanisms that give rise to these hallucinations largely unexplored. To gain a deeper understanding, we adopt a representation-level perspective by introducing sparse autoencoders (SAEs) to decompose dense visual embeddings into sparse, interpretable neurons. Through neuron-level analysis, we identify distinct neuron types, including always-on neurons and image-specific neurons. Our findings reveal that hallucinations often result from disruptions or spurious activations of image-specific neurons, while always-on neurons remain largely stable. Moreover, selectively enhancing or suppressing image-specific neurons enables controllable intervention in LVLM outputs, improving visual grounding and reducing hallucinations. Building on these insights, we propose Contrastive Neuron Steering (CNS), which identifies image-specific neurons via contrastive analysis between clean and noisy inputs. CNS selectively amplifies informative neurons while suppressing perturbation-induced activations, producing more robust and semantically grounded visual representations. This not only enhances visual understanding but also effectively mitigates hallucinations. By operating at the prefilling stage, CNS is fully compatible with existing decoding-stage methods. Extensive experiments on both hallucination-focused and general multimodal benchmarks demonstrate that CNS consistently reduces hallucinations while preserving overall multimodal understanding.
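
The abstract outlines the contrastive step only at a high level; the following sketch, reusing the `SparseAutoencoder` toy above, shows one plausible way to identify image-specific neurons from clean versus noisy inputs and steer them before prefilling. The function name, top-k selection, and the `alpha`/`beta` scaling factors are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def contrastive_neuron_steering(sae, clean_emb, noisy_emb,
                                alpha=1.5, beta=0.5, top_k=64):
    """Amplify neurons active on the clean image, damp perturbation-induced ones."""
    z_clean = sae.encode(clean_emb)   # (num_patches, d_sae) sparse activations
    z_noisy = sae.encode(noisy_emb)

    # Rank neurons by how much their mean activation drops under perturbation.
    diff = (z_clean - z_noisy).mean(dim=0)
    informative = diff.topk(top_k).indices      # strong on clean, weak on noisy
    spurious = (-diff).topk(top_k).indices      # mainly induced by the noise

    scale = torch.ones(z_clean.shape[-1])
    scale[informative] = alpha                  # enhance image-specific neurons
    scale[spurious] = beta                      # suppress spurious activations

    # Decode the steered code back into a dense embedding that would replace the
    # original visual embedding during the prefilling stage.
    return sae.decoder(z_clean * scale)

# Placeholder usage: in practice, clean_emb and noisy_emb would come from the
# vision encoder applied to the original image and a Gaussian-noised copy.
clean_emb = torch.randn(256, 1024)
noisy_emb = clean_emb + 0.5 * torch.randn_like(clean_emb)
steered_emb = contrastive_neuron_steering(sae, clean_emb, noisy_emb)
```

Because this intervention happens on the visual embeddings before generation, a sketch like this would compose with decoding-stage mitigation methods, as the abstract notes.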
Problem

Research questions and friction points this paper is trying to address.

hallucination
large vision-language models
interpretable neurons
visual grounding
multimodal understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive Neuron Steering
sparse autoencoders
interpretable neurons
hallucination mitigation
visual grounding
Guangtao Lyu
School of Electronic Engineering, Xidian University, Xi’an, China
Xinyi Cheng
School of Computer Science and Technology, Xidian University, Xi’an, China
Qi Liu
School of Electronic Engineering, Xidian University, Xi’an, China
Chenghao Xu
EPFL
Robotics · Dynamic SLAM · Active Vision
Jiexi Yan
School of Computer Science and Technology, Xidian University, Xi’an, China
Muli Yang
Institute for Infocomm Research (I2R), A*STAR, Singapore
Computer Vision · Machine Learning · Open-World Learning · Multimodal Modeling
Fen Fang
Institute for Infocomm Research, A*STAR, Singapore
Cheng Deng
University of Edinburgh
On-device LLM · NLP · GeoAI