Locate-then-Sparsify: Attribution Guided Sparse Strategy for Visual Hallucination Mitigation

📅 2026-03-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the susceptibility of large vision-language models (LVLMs) to hallucination during generation, which undermines their reliability. To mitigate this issue, the authors propose LTS-FS, a plug-and-play framework that introduces, for the first time, a layer-wise sparsity control strategy grounded in causal intervention-based attribution. By quantifying the causal contribution of each network layer to hallucinatory outputs, LTS-FS dynamically modulates the strength of feature guidance and applies precise interventions only to layers highly associated with hallucination. This targeted approach avoids perturbing irrelevant layers, thereby effectively suppressing hallucinations while preserving the model’s performance on general tasks. Extensive experiments demonstrate that LTS-FS significantly alleviates hallucination across multiple LVLMs and benchmarks without requiring any retraining.
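To make the attribution idea concrete, the sketch below shows one way a causal intervention could score each layer's relevance to a hallucinated token: mean-ablate the layer's output and measure the drop in that token's log-probability. This is an illustrative approximation only, assuming a Hugging Face-style interface (`model(**inputs).logits`) and a list of per-layer modules in `layers`; the paper's actual attribution procedure and its synthetic hallucination dataset are not reproduced here.

```python
import torch

@torch.no_grad()
def layer_attribution_by_intervention(model, layers, inputs, halluc_token_id):
    """Estimate each layer's relevance to a hallucinated token via a simple
    causal intervention: mean-ablate the layer's output and measure how much
    the hallucinated token's log-probability drops. (Illustrative only; the
    paper's attribution method may use a different intervention.)"""
    def halluc_logprob():
        logits = model(**inputs).logits[0, -1]          # next-token logits
        return torch.log_softmax(logits, dim=-1)[halluc_token_id].item()

    baseline = halluc_logprob()
    scores = []
    for layer in layers:
        def ablate(module, inp, out):
            hidden = out[0] if isinstance(out, tuple) else out
            # Replace every position with the sequence-mean activation.
            ablated = hidden.mean(dim=1, keepdim=True).expand_as(hidden)
            return (ablated, *out[1:]) if isinstance(out, tuple) else ablated

        handle = layer.register_forward_hook(ablate)
        scores.append(baseline - halluc_logprob())      # large drop => relevant layer
        handle.remove()
    return scores
```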

📝 Abstract
Despite the significant advancements in Large Vision-Language Models (LVLMs), their tendency to generate hallucinations undermines reliability and restricts broader practical deployment. Among the hallucination mitigation methods, feature steering emerges as a promising approach that reduces erroneous outputs in LVLMs without increasing inference costs. However, current methods apply uniform feature steering across all layers. This heuristic strategy ignores inter-layer differences, potentially disrupting layers unrelated to hallucinations and ultimately leading to performance degradation on general tasks. In this paper, we propose a plug-and-play framework called Locate-Then-Sparsify for Feature Steering (LTS-FS), which controls the steering intensity according to the hallucination relevance of each layer. We first construct a synthetic dataset comprising token-level and sentence-level hallucination cases. Based on this dataset, we introduce an attribution method based on causal interventions to quantify the hallucination relevance of each layer. With the attribution scores across layers, we propose a layerwise strategy that converts these scores into feature steering intensities for individual layers, enabling more precise adjustments specifically on hallucination-relevant layers. Extensive experiments across multiple LVLMs and benchmarks demonstrate that our LTS-FS framework effectively mitigates hallucination while preserving strong performance.
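As an illustration of the locate-then-sparsify step described in the abstract, the sketch below converts per-layer attribution scores into sparse steering intensities (keeping only the top-k most hallucination-relevant layers) and applies scaled steering vectors through forward hooks. The function names, the `top_k`/`base_alpha` parameters, and the top-k normalization are assumptions made for illustration, not the paper's exact mapping; the steering vectors themselves are taken as given.

```python
import torch

def scores_to_intensities(attribution_scores, top_k=8, base_alpha=1.0):
    """Map per-layer attribution scores to sparse steering intensities.
    Layers outside the top-k most relevant layers get zero intensity and are
    left untouched. (One plausible mapping; LTS-FS's exact rule may differ.)"""
    scores = torch.as_tensor(attribution_scores, dtype=torch.float32)
    intensities = torch.zeros_like(scores)
    top = torch.topk(scores, k=min(top_k, scores.numel())).indices
    # Scale the retained layers in proportion to their normalized scores.
    intensities[top] = base_alpha * scores[top] / scores[top].sum()
    return intensities

def register_steering_hooks(model_layers, steering_vectors, intensities):
    """Attach forward hooks that add a scaled steering vector to each layer's
    hidden states. Returns the hook handles so they can be removed later."""
    handles = []
    for layer, vec, alpha in zip(model_layers, steering_vectors, intensities):
        if alpha == 0:  # sparsified layer: no intervention
            continue

        def hook(module, inputs, output, vec=vec, alpha=alpha):
            hidden = output[0] if isinstance(output, tuple) else output
            steered = hidden + alpha * vec.to(hidden.dtype).to(hidden.device)
            return (steered, *output[1:]) if isinstance(output, tuple) else steered

        handles.append(layer.register_forward_hook(hook))
    return handles
```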
Problem

Research questions and friction points this paper is trying to address.

visual hallucination
feature steering
layer relevance
attribution
Large Vision-Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

feature steering
hallucination mitigation
attribution method
causal intervention
layerwise sparsification
Tian Dang
School of Advanced Interdisciplinary Sciences, University of Chinese Academy of Sciences; State Key Lab. of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
Chao Bi
State Key Lab. of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
Shufan Shen
State Key Lab. of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences
Jinzhe Liu
Ph.D. student, Institute of Computing Technology, Chinese Academy of Sciences
Multimodal, Large language model, Knowledge editing
Qingming Huang
University of the Chinese Academy of Sciences
Multimedia Analysis and Retrieval, Image and Video Processing, Pattern Recognition, Computer Vision, Video Coding
Shuhui Wang
School of Advanced Interdisciplinary Sciences, University of Chinese Academy of Sciences; State Key Lab. of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences