Tinted Frames: Question Framing Blinds Vision-Language Models

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work reveals that vision-language models allocate visual attention inconsistently across different linguistic framings of the same question, leading to degraded visual reasoning performance and cross-framing instability. The study identifies that language framing induces selective occlusion in visual attention and, for the first time, attributes the performance degradation primarily to misaligned attention distribution. To address this, the authors propose a lightweight tuning method based on learnable prompt tokens that steers the model toward more robust, vision-grounded attention patterns without altering its architecture. Experiments show that this approach significantly improves both visual grounding and overall accuracy across diverse question framings, demonstrating strong generalizability.
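The "learnable prompt tokens" idea in the summary is a form of soft prompt tuning: a small matrix of trainable embeddings is prepended to the model's input embeddings while the backbone stays frozen. A minimal, framework-free sketch is below; the token count, initialization scale, and insertion point are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

class SoftPrompt:
    """Learnable prompt tokens prepended to a frozen model's input embeddings.

    Generic soft-prompt-tuning sketch. During tuning, only `self.tokens`
    would receive gradient updates; all VLM weights remain frozen.
    """

    def __init__(self, num_tokens: int, embed_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Small random init for the trainable prompt matrix (assumed scale).
        self.tokens = rng.normal(scale=0.02, size=(num_tokens, embed_dim))

    def __call__(self, input_embeds: np.ndarray) -> np.ndarray:
        # input_embeds: (seq_len, embed_dim)
        # returns: (num_tokens + seq_len, embed_dim)
        return np.concatenate([self.tokens, input_embeds], axis=0)
```

Because the prompt lives purely in embedding space, this adds only `num_tokens * embed_dim` trainable parameters and leaves the architecture unchanged, matching the "lightweight, no architectural change" claim.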

📝 Abstract
Vision-Language Models (VLMs) have been shown to be blind, often underutilizing their visual inputs even on tasks that require visual reasoning. In this work, we demonstrate that VLMs are selectively blind: they modulate the amount of attention applied to visual inputs based on linguistic framing, even when alternative framings demand identical visual reasoning. Using visual attention as a probe, we quantify how framing alters both the amount and distribution of attention over the image. Constrained framings, such as multiple-choice and yes/no questions, induce substantially lower attention to image context than open-ended framings, reduce focus on task-relevant regions, and shift attention towards uninformative tokens. We further demonstrate that this attention misallocation is the principal cause of degraded accuracy and cross-framing inconsistency. Building on this mechanistic insight, we introduce a lightweight prompt-tuning method using learnable tokens that encourages the robust, visually grounded attention patterns observed in open-ended settings, improving both visual grounding and performance across framings.
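The abstract's probe has two parts: how much attention mass a query places on image tokens, and how that mass is distributed across them. A simple version of such a probe can be sketched as below; the specific statistics (total mass and normalized entropy over image tokens) are a generic choice, not necessarily the paper's exact metrics.

```python
import numpy as np

def image_attention_stats(attn, image_token_mask):
    """Summarize one query position's attention over the image tokens.

    attn: attention weights over all key positions (sums to 1).
    image_token_mask: boolean mask marking which keys are image tokens.
    Returns (mass, normalized entropy): mass is the total attention on
    image tokens; normalized entropy (0..1) measures how evenly that
    mass is spread (1 = uniform, 0 = concentrated on one token).
    """
    attn = np.asarray(attn, dtype=float)
    mask = np.asarray(image_token_mask, dtype=bool)
    image_attn = attn[mask]
    mass = image_attn.sum()
    if mass == 0 or image_attn.size < 2:
        return mass, 0.0
    # Renormalize to a distribution over image tokens only.
    p = image_attn / mass
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log(nz))
    return mass, entropy / np.log(image_attn.size)
```

Under this probe, the abstract's finding reads as: constrained framings lower `mass` and shift the distribution away from task-relevant regions relative to open-ended framings.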
Problem

Research questions and friction points this paper is trying to address.

vision-language models
question framing
visual attention
visual grounding
framing bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language models
question framing
visual attention
prompt tuning
visual grounding