KVSmooth: Mitigating Hallucination in Multi-modal Large Language Models through Key-Value Smoothing

📅 2026-02-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the issue of visually inconsistent hallucinations in multimodal large language models during long-sequence generation, which often arise from semantic drift. To mitigate this, the authors propose KVSmooth, a training-free and architecture-agnostic method that adaptively smooths keys and values in the KV cache during inference. KVSmooth introduces attention entropy as a dynamic signal to modulate smoothing intensity, leveraging exponential moving averages for plug-and-play hallucination suppression. Experimental results demonstrate a significant reduction in hallucination, with the CHAIR_S score decreasing from 41.8 to 18.2, while simultaneously improving generation quality—evidenced by an increase in F1 score from 77.5 to 79.2—thereby enhancing both precision and recall without requiring model retraining or structural modifications.

📝 Abstract
Despite the significant progress of Multimodal Large Language Models (MLLMs) across diverse tasks, hallucination -- corresponding to the generation of visually inconsistent objects, attributes, or relations -- remains a major obstacle to their reliable deployment. Unlike pure language models, MLLMs must ground their generation process in visual inputs. However, existing models often suffer from semantic drift during decoding, causing outputs to diverge from visual facts as the sequence length increases. To address this issue, we propose KVSmooth, a training-free and plug-and-play method that mitigates hallucination by performing attention-entropy-guided adaptive smoothing on hidden states. Specifically, KVSmooth applies an exponential moving average (EMA) to both keys and values in the KV-Cache, while dynamically quantifying the sink degree of each token through the entropy of its attention distribution to adaptively adjust the smoothing strength. Unlike computationally expensive retraining or contrastive decoding methods, KVSmooth operates efficiently during inference without additional training or model modification. Extensive experiments demonstrate that KVSmooth significantly reduces hallucination ($\mathit{CHAIR}_{S}$ from $41.8 \rightarrow 18.2$) while improving overall performance ($F_1$ score from $77.5 \rightarrow 79.2$), achieving higher precision and recall simultaneously. In contrast, prior methods often improve one at the expense of the other, validating the effectiveness and generality of our approach.
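The core mechanism in the abstract, an exponential moving average over cached keys and values whose strength is modulated per token by the entropy of its attention distribution, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the entropy normalization, and the choice to smooth low-entropy (sink-like) tokens more strongly are all illustrative assumptions.

```python
import math

def entropy(dist):
    # Shannon entropy of one token's attention distribution
    return -sum(p * math.log(p) for p in dist if p > 0)

def kv_ema_smooth(keys, values, attn, alpha_min=0.1, alpha_max=0.9):
    """Entropy-guided EMA over a KV cache (illustrative sketch).

    keys, values: one d-dim vector (list of floats) per cached token.
    attn: one attention distribution per token; its entropy serves as a
          proxy for the token's "sink degree".
    alpha is a per-token keep-ratio: in this sketch, low-entropy
    (sink-like) tokens are pulled more strongly toward the running
    average, while high-entropy tokens are left mostly unchanged.
    """
    ents = [entropy(a) for a in attn]
    lo, hi = min(ents), max(ents)
    span = (hi - lo) or 1.0  # avoid division by zero when all equal
    alphas = [alpha_min + (alpha_max - alpha_min) * (e - lo) / span
              for e in ents]

    # EMA along the sequence dimension; the first token is kept as-is.
    k_s, v_s = [list(keys[0])], [list(values[0])]
    for t in range(1, len(keys)):
        a = alphas[t]
        k_s.append([a * k + (1 - a) * kp for k, kp in zip(keys[t], k_s[-1])])
        v_s.append([a * v + (1 - a) * vp for v, vp in zip(values[t], v_s[-1])])
    return k_s, v_s
```

Because the smoothing runs over the cache at inference time, a sketch like this slots in after each decoding step without touching model weights, which matches the paper's training-free, plug-and-play framing.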
Problem

Research questions and friction points this paper is trying to address.

hallucination
multimodal large language models
semantic drift
visual grounding
KV-Cache
Innovation

Methods, ideas, or system contributions that make the work stand out.

KVSmooth
hallucination mitigation
key-value smoothing
attention entropy
multimodal large language models
Siyu Jiang
Huazhong University of Science and Technology
Feiyang Chen
Huazhong University of Science and Technology
Xiaojin Zhang
Huazhong University of Science and Technology
Kun He
Professor, Huazhong University of Science and Technology
AI Security, Graph data mining, Optimization, Deep learning, AI4Sci