🤖 AI Summary
This work addresses the susceptibility of large vision-language models (LVLMs) to hallucination during generation, which undermines their reliability. To mitigate this issue, the authors propose LTS-FS, a plug-and-play framework that introduces, for the first time, a layer-wise sparsity control strategy grounded in causal intervention-based attribution. By quantifying the causal contribution of each network layer to hallucinatory outputs, LTS-FS dynamically modulates the strength of feature steering and applies precise interventions only to layers highly associated with hallucination. This targeted approach avoids perturbing irrelevant layers, thereby effectively suppressing hallucinations while preserving the model's performance on general tasks. Extensive experiments demonstrate that LTS-FS significantly alleviates hallucination across multiple LVLMs and benchmarks without requiring any retraining.
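The causal intervention-based attribution described above can be illustrated with a minimal sketch: ablate one layer at a time and measure how much a hallucination score changes. Everything here is a hypothetical stand-in — a toy residual stack in place of an LVLM, fixed `weights`, and a made-up `hallu_score` proxy — not the paper's implementation.

```python
# Toy stand-in for a layered model: each layer applies a residual
# update x -> x + w * x (hypothetical; LTS-FS would intervene on an
# actual LVLM's transformer layers).
weights = [0.5, -0.2, 0.8, 0.1]

def forward(x, ablate=None):
    # Causal intervention: skip layer `ablate`, leave the rest intact.
    for i, w in enumerate(weights):
        if i != ablate:
            x += w * x
    return x

def hallu_score(output):
    # Hypothetical scalar proxy for hallucination strength, e.g. the
    # probability mass assigned to objects absent from the image.
    return abs(output)

base = hallu_score(forward(1.0))
# Attribution score per layer: the drop in hallucination score when
# that layer is ablated. A larger drop means the layer contributes
# more causally to the hallucinatory output.
attr = [base - hallu_score(forward(1.0, ablate=i))
        for i in range(len(weights))]
```

Under this toy setup, the layer whose ablation most reduces the score receives the highest attribution and would be targeted most strongly for steering.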
📝 Abstract
Despite significant advancements in Large Vision-Language Models (LVLMs), their tendency to generate hallucinations undermines reliability and restricts broader practical deployment. Among hallucination mitigation methods, feature steering has emerged as a promising approach that reduces erroneous outputs in LVLMs without increasing inference costs. However, current methods apply uniform feature steering across all layers. This heuristic strategy ignores inter-layer differences, potentially disrupting layers unrelated to hallucination and ultimately degrading performance on general tasks. In this paper, we propose a plug-and-play framework called Locate-Then-Sparsify for Feature Steering (LTS-FS), which controls the steering intensity according to each layer's relevance to hallucination. We first construct a synthetic dataset comprising token-level and sentence-level hallucination cases. Based on this dataset, we introduce a causal intervention-based attribution method to quantify the hallucination relevance of each layer. With the attribution scores across layers, we propose a layer-wise strategy that converts these scores into feature steering intensities for individual layers, enabling precise adjustments targeted at hallucination-relevant layers. Extensive experiments across multiple LVLMs and benchmarks demonstrate that our LTS-FS framework effectively mitigates hallucination while preserving strong performance on general tasks.
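The locate-then-sparsify step — converting per-layer attribution scores into steering intensities while leaving low-relevance layers untouched — might look like the following sketch. The `keep_ratio` thresholding rule and the max-normalization are illustrative assumptions, not the paper's exact strategy.

```python
def steering_intensities(attr, base_alpha=1.0, keep_ratio=0.5):
    """Map per-layer attribution scores to feature-steering intensities.

    Hypothetical locate-then-sparsify rule: keep only the top
    `keep_ratio` fraction of layers ranked by attribution, scale each
    kept layer's intensity by its score normalized to the peak, and
    assign zero steering to all other layers so they are untouched.
    """
    k = max(1, int(keep_ratio * len(attr)))
    # "Locate": indices of the k most hallucination-relevant layers.
    kept = sorted(range(len(attr)), key=lambda i: attr[i], reverse=True)[:k]
    peak = max(abs(attr[i]) for i in kept) or 1.0
    # "Sparsify": zero intensity outside the located layers.
    return [base_alpha * attr[i] / peak if i in kept else 0.0
            for i in range(len(attr))]
```

For example, with attribution scores `[0.1, 0.9, 0.0, 0.4]` and the default `keep_ratio=0.5`, only the two most relevant layers receive nonzero steering, with the top layer at full strength `base_alpha`.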