More Thought, Less Accuracy? On the Dual Nature of Reasoning in Vision-Language Models

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a duality dilemma in vision-language models (VLMs): enhancing logical reasoning improves performance on complex tasks but degrades fundamental visual recognition—termed "the more reasoning, the worse the perception." To address this, we propose Vision-Anchored Policy Optimization (VAPO), a method that integrates visual-attention constraints into the Group Relative Policy Optimization framework via reinforcement learning. VAPO explicitly steers the model toward visually grounded reasoning paths, thereby mitigating visual forgetting. Evaluated on multiple mainstream benchmarks, VAPO-Thinker-7B achieves state-of-the-art performance: it significantly improves basic visual recognition accuracy while preserving—and even enhancing—complex multimodal reasoning capabilities. Crucially, VAPO is the first approach to jointly optimize logical reasoning and perceptual grounding at the level of the training paradigm, reconciling these traditionally competing objectives.

📝 Abstract
Reasoning has emerged as a pivotal capability in Large Language Models (LLMs). Through Reinforcement Learning (RL), typically Group Relative Policy Optimization (GRPO), these models are able to solve complex tasks such as mathematics and code generation. Building on these advances, recent research has sought to extend reasoning to Vision-Language Models (VLMs), yielding promising results across diverse visual tasks. Despite this progress, our study uncovers the dual nature of multimodal reasoning: while it substantially enhances logical inference and facilitates performance on challenging problems, it may gradually impair perceptual grounding, leading to recognition failures on otherwise basic visual questions. Through further analysis, we attribute this phenomenon to visual forgetting, wherein prolonged reasoning causes the model to increasingly disregard visual input. To address this, we propose Vision-Anchored Policy Optimization (VAPO), a simple yet effective method that explicitly steers the reasoning process toward visually grounded trajectories. The resulting model, VAPO-Thinker-7B, significantly strengthens reliance on visual information and achieves new state-of-the-art results on a wide range of established benchmarks. Project page: https://xytian1008.github.io/VAPO/
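The abstract describes augmenting GRPO with a visual-anchoring signal, but includes no code. A minimal sketch of the group-relative advantage computation at the heart of GRPO, with a hypothetical visual-grounding bonus in the spirit of VAPO, might look as follows. The `visual_scores` input, the `alpha` weight, and the additive combination are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch of GRPO's group-relative advantage computation, extended with a
# hypothetical vision-anchoring bonus (assumption, not the paper's exact method).
from statistics import mean, pstdev

def group_relative_advantages(task_rewards, visual_scores, alpha=0.5):
    """For one group of rollouts sampled from the same prompt:
    combine each rollout's task reward with a visual-grounding score,
    then normalize within the group (subtract mean, divide by std),
    as GRPO does in place of a learned value baseline."""
    combined = [r + alpha * v for r, v in zip(task_rewards, visual_scores)]
    mu, sigma = mean(combined), pstdev(combined)
    if sigma == 0:  # all rollouts tied: no learning signal for this group
        return [0.0 for _ in combined]
    return [(c - mu) / sigma for c in combined]

# Example: four rollouts for one question. The bonus favors rollouts whose
# reasoning stays anchored to the image (higher visual score), so a correct,
# visually grounded rollout gets a larger advantage than a correct but
# weakly grounded one.
advs = group_relative_advantages([1.0, 1.0, 0.0, 0.0], [0.9, 0.2, 0.8, 0.1])
```

The normalized advantages sum to zero within each group, so rollouts compete only against their group peers rather than against an absolute reward scale.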
Problem

Research questions and friction points this paper is trying to address.

Multimodal reasoning impairs visual perception in VLMs
Visual forgetting occurs during prolonged reasoning processes
Anchoring reasoning to visual inputs improves model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Anchored Policy Optimization method enhances visual grounding
Steers reasoning process toward visually grounded trajectories
Strengthens model reliance on visual information
Xinyu Tian
Australian National University
Shu Zou
Maincode
Zhaoyuan Yang
GE Research
Machine Learning · Computer Vision · Edge Computing · Robotics
Mengqi He
Australian National University
Fabian Waschkowski
University of Melbourne
Lukas Wesemann
University of Melbourne
Peter Tu
General Electric
Computer Vision · Artificial Intelligence
Jing Zhang
Australian National University