System-Mediated Attention Imbalances Make Vision-Language Models Say Yes

📅 2026-01-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Vision-language models often exhibit a “yes”-bias hallucination due to imbalanced attention allocation among the system, image, and text modalities. This work identifies redundant attention in the system modality as a key contributor to this bias, a factor overlooked by prior studies that focused solely on image-side attention. To address this, the authors propose a causal attention reallocation method that dynamically rebalances weights across the three modalities to strengthen reliance on both visual and textual inputs. Experimental results demonstrate that the proposed approach significantly mitigates “yes”-bias hallucination, outperforming existing methods across multiple benchmarks. Furthermore, the study reveals that dependence on coarse-grained input representations is a critical mechanism underlying this bias.
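The reallocation described above can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function name, the `alpha` parameter, and the proportional redistribution scheme are assumptions. It takes a row-stochastic attention matrix, shrinks the mass placed on system tokens, and hands the freed mass to image and text tokens in proportion to their existing attention:

```python
import numpy as np

def reallocate_attention(attn, system_idx, image_idx, text_idx, alpha=0.5):
    """Down-weight attention on system tokens by a factor alpha and
    redistribute the freed mass over image and text tokens, keeping
    each row a valid distribution (rows still sum to 1).

    attn: (num_queries, num_keys) row-stochastic attention weights.
    system_idx / image_idx / text_idx: index arrays partitioning the keys.
    """
    attn = attn.copy()
    sys_mass = attn[:, system_idx].sum(axis=-1, keepdims=True)

    # Shrink system attention; 'freed' is the mass to move elsewhere.
    attn[:, system_idx] *= (1.0 - alpha)
    freed = sys_mass * alpha

    # Redistribute proportionally to existing image/text attention.
    content_idx = np.concatenate([image_idx, text_idx])
    content_mass = attn[:, content_idx].sum(axis=-1, keepdims=True)
    attn[:, content_idx] *= 1.0 + freed / np.maximum(content_mass, 1e-9)
    return attn
```

Because the freed system mass is added back proportionally, each row remains a probability distribution, so the sketch can be applied per head and per layer without renormalising the rest of the model.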

πŸ“ Abstract
Vision-language model (VLM) hallucination is commonly linked to imbalanced allocation of attention across input modalities: system, image and text. However, existing mitigation strategies tend towards an image-centric interpretation of these imbalances, often prioritising increased image attention while giving less consideration to the roles of the other modalities. In this study, we evaluate a more holistic, system-mediated account, which attributes these imbalances to functionally redundant system weights that reduce attention to image and textual inputs. We show that this framework offers a useful empirical perspective on the yes-bias, a common form of hallucination in which VLMs indiscriminately respond 'yes'. Causally redistributing attention from the system modality to image and textual inputs substantially suppresses this bias, often outperforming existing approaches. We further present evidence suggesting that system-mediated attention imbalances contribute to the yes-bias by encouraging a default reliance on coarse input representations, which are effective for some tasks but ill-suited to others. Taken together, these findings firmly establish system attention as a key factor in VLM hallucination and highlight its potential as a lever for mitigation.
Problem

Research questions and friction points this paper is trying to address.

vision-language models
hallucination
yes-bias
attention imbalance
system modality
Innovation

Methods, ideas, or system contributions that make the work stand out.

system-mediated attention
vision-language models
hallucination
yes-bias
attention redistribution