From Sight to Insight: Improving Visual Reasoning Capabilities of Multimodal Models via Reinforcement Learning

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited performance of multimodal large language models (MLLMs) in visual reasoning tasks, which often stems from insufficient integration of visual information. The authors propose a reward-driven reinforcement learning approach, requiring no costly supervision, that explicitly guides open-source models to generate longer, structured chains of visual reasoning, counteracting their tendency to bypass visual inputs. This is achieved through six multidimensional reward functions targeting image understanding, reasoning steps, and answer accuracy, combined with Group Relative Policy Optimization (GRPO). Evaluated on Qwen-2.5-VL-7B, the method yields a 5.56% improvement over the baseline, with consistent gains across both in-domain and out-of-domain tasks. As a separate diagnostic finding motivating the approach, converting images into textual descriptions boosts the performance of Claude 3.5 and Claude 3.7 by 26.7% and 23.6%, respectively, indicating that visual perception is the key bottleneck.

📝 Abstract
Reinforcement learning (RL) has emerged as a promising approach for eliciting reasoning chains before generating final answers. However, multimodal large language models (MLLMs) generate reasoning that lacks integration of visual information. This limits their ability to solve problems that demand accurate visual perception, such as visual puzzles. We show that visual perception is the key bottleneck in such tasks: converting images into textual descriptions significantly improves performance, yielding gains of 26.7% for Claude 3.5 and 23.6% for Claude 3.7. To address this, we investigate reward-driven RL as a mechanism to unlock long visual reasoning in open-source MLLMs without requiring costly supervision. We design and evaluate six reward functions targeting different reasoning aspects, including image understanding, thinking steps, and answer accuracy. Using group relative policy optimization (GRPO), our approach explicitly incentivizes longer, structured reasoning and mitigates bypassing of visual information. Experiments on Qwen-2.5-VL-7B achieve 5.56% improvements over the base model, with consistent gains across both in-domain and out-of-domain settings.
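The abstract's core recipe, multiple per-aspect rewards combined and optimized with GRPO, can be sketched at a high level. The function names, reward labels, and weighting scheme below are illustrative assumptions, not the paper's actual implementation; the sketch only shows the two ingredients the abstract names: a combined reward over reasoning aspects, and GRPO's group-relative advantage normalization (which replaces a learned value critic).

```python
# Hedged sketch: combining per-aspect rewards and computing GRPO-style
# group-relative advantages. All names and weights are hypothetical.
from statistics import mean, pstdev


def combined_reward(rewards: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted sum of per-aspect rewards (e.g. image understanding,
    thinking steps, answer accuracy). Missing weights default to 1.0."""
    return sum(weights.get(aspect, 1.0) * value
               for aspect, value in rewards.items())


def group_relative_advantages(group_rewards: list[float],
                              eps: float = 1e-8) -> list[float]:
    """GRPO normalizes each sampled response's reward against the mean
    and population std of its own sampling group, so no critic network
    is needed: advantage_i = (r_i - mean) / (std + eps)."""
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)
    return [(r - mu) / (sigma + eps) for r in group_rewards]
```

For a group of sampled responses to one prompt, the advantages sum to (approximately) zero, so responses are pushed up or down only relative to their peers in the same group.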
Problem

Research questions and friction points this paper is trying to address.

visual reasoning
multimodal large language models
visual perception
reasoning chains
visual information integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

reinforcement learning
multimodal large language models
visual reasoning
reward design
GRPO