Qwen Look Again: Guiding Vision-Language Reasoning Models to Re-attention Visual Information

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Vision-language reasoning models (VLRMs) suffer from visual attention decay during long-chain reasoning, which leads to frequent hallucinations; purely textual reflection proves insufficient to mitigate this. Method: a vision-text co-reflective mechanism featuring novel Visual Token COPY and ROUTE operations that dynamically refocus critical visual information during inference, coupled with Balanced Reflective Policy Optimization (BRPO), a reinforcement learning framework. The paper first establishes theoretically, and validates empirically, an exponential degradation law of visual attention with respect to reasoning steps, and integrates multi-stage training with adaptive reflection triggering. Contribution/Results: state-of-the-art performance across multiple visual question answering benchmarks with significantly reduced hallucination rates. The code is publicly released.

📝 Abstract
Inference-time scaling drives extended reasoning to enhance the performance of Vision-Language Models (VLMs), thus forming powerful Vision-Language Reasoning Models (VLRMs). However, long reasoning dilutes visual tokens, causing visual information to receive less attention, which may trigger hallucinations. Although introducing text-only reflection processes shows promise in language models, we demonstrate that it is insufficient to suppress hallucinations in VLMs. To address this issue, we introduce Qwen-LookAgain (Qwen-LA), a novel VLRM designed to mitigate hallucinations by incorporating a vision-text reflection process that guides the model to re-attend to visual information during reasoning. We first propose a reinforcement learning method, Balanced Reflective Policy Optimization (BRPO), which guides the model to decide on its own when to generate vision-text reflection, and to balance the number and length of reflections. Then, we formally prove that VLRMs lose attention to visual tokens as reasoning progresses, and demonstrate that supplementing visual information during reflection enhances visual attention. Therefore, during training and inference, Visual Token COPY and Visual Token ROUTE are introduced to force the model to re-attend to visual information at the visual level, addressing the limitations of text-only reflection. Experiments on multiple visual QA datasets and hallucination metrics indicate that Qwen-LA achieves leading accuracy while reducing hallucinations. Our code is available at: https://github.com/Liar406/Look_Again.
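The dilution effect the abstract describes can be illustrated with a toy calculation (not from the paper): under a simplifying uniform-attention assumption, the share of attention mass landing on a fixed block of visual tokens shrinks as the generated reasoning chain grows. The token counts below are hypothetical.

```python
def visual_attention_share(num_visual_tokens: int, num_text_tokens: int) -> float:
    """Fraction of attention mass on visual tokens, assuming attention is
    spread uniformly over all tokens currently in context."""
    total = num_visual_tokens + num_text_tokens
    return num_visual_tokens / total

V = 576  # hypothetical visual token count (e.g., a ViT patch grid)
for steps in (0, 512, 2048, 8192):
    share = visual_attention_share(V, steps)
    print(f"reasoning tokens={steps:5d}  visual share={share:.3f}")
```

Real attention is learned, not uniform, but the paper's formal result points in the same direction: the longer the reasoning chain, the less attention visual tokens receive, unless visual information is actively re-supplied.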
Problem

Research questions and friction points this paper is trying to address.

Mitigate hallucinations in Vision-Language Reasoning Models
Address attention dilution of visual tokens during reasoning
Enhance visual attention with vision-text reflection process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning (BRPO) guides when to trigger vision-text reflection and balances reflection count and length
Visual Token COPY re-supplies visual tokens during reflection
Visual Token ROUTE forces re-attention to visual information at the visual level
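A minimal sketch of the COPY idea, assuming (the paper's exact mechanism may differ) that reflection points are marked by a trigger token and that cached visual tokens are re-appended there so later attention can reach them at short range. All names here are illustrative, not the paper's API.

```python
def apply_visual_token_copy(context: list[str], visual_tokens: list[str],
                            reflection_trigger: str = "<reflect>") -> list[str]:
    """Re-insert the cached visual tokens after each reflection trigger,
    so the model attends to fresh visual evidence while reflecting."""
    out = []
    for tok in context:
        out.append(tok)
        if tok == reflection_trigger:
            out.extend(visual_tokens)  # COPY: duplicate the visual evidence
    return out

ctx = ["<img1>", "<img2>", "step1", "<reflect>", "step2"]
print(apply_visual_token_copy(ctx, ["<img1>", "<img2>"]))
```

In the actual model this would operate on token embeddings rather than strings, and ROUTE would additionally steer attention toward the re-inserted tokens; this sketch only shows the re-supply step.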
Xu Chu
Peking University, Beijing, China
Xinrong Chen
Peking University, Beijing, China
Guanyu Wang
Peking University, Beijing, China
Zhijie Tan
Peking University, Beijing, China
Kui Huang
Baidu
Wenyu Lv
Baidu Inc., Beijing, China
Tong Mo
AI Research Engineer at Huawei Canada
Reinforcement Learning, Keyword Spotting
Weiping Li
Peking University, Beijing, China