VLM-R$^3$: Region Recognition, Reasoning, and Refinement for Enhanced Multimodal Chain-of-Thought

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing reasoning-based multimodal large language models (MLLMs) excel at generating long reasoning chains but struggle to dynamically attend to and iteratively re-examine visual regions, resulting in imprecise alignment between textual reasoning and visual evidence. To address this, we propose Region-Conditioned Reinforcement Policy Optimization (R-GRPO), which enables the model to autonomously determine when to supplement visual evidence, where to focus spatially, and how to fuse sub-image information into its reasoning. We further introduce the Visuo-Lingual Interleaved Rationale (VLIR) corpus, a fine-grained vision-language interleaved reasoning dataset that provides step-level supervision for visually grounded, stepwise reasoning. Our approach combines MLLMs, reinforcement learning, visual region cropping/scaling, and fine-grained visual grounding supervision. Evaluated on MathVista and ScienceQA under zero-shot and few-shot settings, VLM-R$^3$ establishes new state-of-the-art performance, with particularly substantial gains on spatial reasoning and fine-grained visual cue identification tasks.

📝 Abstract
Recently, reasoning-based MLLMs have achieved a degree of success in generating long-form textual reasoning chains. However, they still struggle with complex tasks that necessitate dynamic and iterative focusing on and revisiting of visual regions to achieve precise grounding of textual reasoning in visual evidence. We introduce VLM-R$^3$ (Visual Language Model with Region Recognition and Reasoning), a framework that equips an MLLM with the ability to (i) decide when additional visual evidence is needed, (ii) determine where to ground within the image, and (iii) seamlessly weave the relevant sub-image content back into an interleaved chain-of-thought. The core of our method is Region-Conditioned Reinforcement Policy Optimization (R-GRPO), a training paradigm that rewards the model for selecting informative regions, formulating appropriate transformations (e.g., crop, zoom), and integrating the resulting visual context into subsequent reasoning steps. To bootstrap this policy, we compile a modest but carefully curated Visuo-Lingual Interleaved Rationale (VLIR) corpus that provides step-level supervision on region selection and textual justification. Extensive experiments on MathVista, ScienceQA, and other benchmarks show that VLM-R$^3$ sets a new state of the art in zero-shot and few-shot settings, with the largest gains appearing on questions demanding subtle spatial reasoning or fine-grained visual cue extraction.
Problem

Research questions and friction points this paper is trying to address.

Enhancing visual grounding in reasoning tasks
Dynamic region selection for visual evidence
Integrating visual context into reasoning chains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic visual region focusing for precise grounding
Region-Conditioned Reinforcement Policy Optimization (R-GRPO)
Visuo-Lingual Interleaved Rationale (VLIR) corpus
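The abstract describes R-GRPO's action space as selecting a region and applying transformations such as crop and zoom before weaving the sub-image back into the reasoning chain. The paper's implementation is not reproduced here, but the basic crop-and-zoom operation it references can be sketched as follows (the helper functions and the toy pixel grid are illustrative assumptions, not the authors' code):

```python
# Illustrative sketch of the crop/zoom region operations named in the
# R-GRPO action space (hypothetical helpers, not the authors' code).
# The "image" is a plain 2D list of pixel values.

def crop(image, x0, y0, x1, y1):
    """Return the sub-image covering the box [x0, x1) x [y0, y1)."""
    return [row[x0:x1] for row in image[y0:y1]]

def zoom(image, factor):
    """Nearest-neighbour upscaling by an integer factor."""
    out = []
    for row in image:
        scaled_row = [px for px in row for _ in range(factor)]
        out.extend([scaled_row] * factor)
    return out

# Toy 4x4 "image": the policy decides *where* to look (the box) and
# *how* to transform it (zoom) before re-inserting the result into
# the interleaved chain-of-thought.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]

region = crop(img, 1, 1, 3, 3)   # -> [[6, 7], [10, 11]]
enlarged = zoom(region, 2)       # each pixel becomes a 2x2 block
```

In the actual framework these operations act on real image tensors and are chosen by the policy under a reward that favors informative regions; the sketch only fixes the geometry of the two transformations.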
Chaoya Jiang
Shandong University
Wei Ye
National Engineering Research Center for Software Engineering, Peking University
Han Yang
ZEEKR Intelligent Technology Holding Limited
Haiyang Xu
Alibaba Group
Ming Yan
Alibaba Group
Ji Zhang
Alibaba Group
Fei Huang
Alibaba Group
Shikun Zhang
Peking University