Bridging Perception and Reasoning: Token Reweighting for RLVR in Multimodal LLMs

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a core challenge in applying reinforcement learning with verifiable rewards (RLVR) to multimodal large language models: perception-related and reasoning-related tokens are tightly coupled in model responses, so optimizing either type in isolation fails to improve visual grounding and symbolic reasoning jointly. To resolve this, the authors propose Token Reweighting (ToR), a plug-and-play strategy that explicitly models the interdependence between the two token types and dynamically identifies critical tokens of each type for collaborative reweighting during RLVR training. ToR is compatible with existing algorithms such as GRPO and DAPO, using experience-guided weight allocation to achieve joint optimization. Experiments show that ToR achieves state-of-the-art performance across multiple multimodal reasoning benchmarks while significantly improving both visual-grounding accuracy and reasoning coherence.

📝 Abstract
Extending Reinforcement Learning with Verifiable Rewards (RLVR) to multimodal large language models (MLLMs) faces a fundamental challenge: their responses inherently interleave perception-related tokens, which ground visual content, with reasoning-related tokens, which construct reasoning chains. These token types instantiate distinct yet interdependent capacities -- visual grounding and symbolic reasoning -- making isolated optimization insufficient. Through token-level empirical analysis, we demonstrate that optimizing either perception- or reasoning-only tokens consistently underperforms full optimization, underscoring their inherent coupling. To address this, we propose a plug-and-play Token-Reweighting (ToR) strategy that explicitly models this interdependence by identifying critical tokens of both types and dynamically reweighting them during RLVR training. Applied on top of existing methods (e.g., GRPO and DAPO), ToR delivers consistent performance gains across multiple multimodal reasoning benchmarks, achieving state-of-the-art performance with both accurate visual grounding and coherent reasoning.
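At a high level, reweighting on top of a GRPO/DAPO-style objective can be sketched as a per-token weighted clipped surrogate loss. The snippet below is a minimal illustration, not the paper's implementation: the function name, the fixed up-weights `w_perc`/`w_reas`, and the assumption that critical-token masks arrive as precomputed inputs are all hypothetical (how critical tokens of each type are identified, and how weights are allocated from experience, is the paper's actual contribution).

```python
import math

def tor_weighted_loss(logprobs, old_logprobs, advantages,
                      perception_mask, reasoning_mask,
                      w_perc=1.5, w_reas=1.5, clip_eps=0.2):
    """Clipped policy-gradient surrogate with per-token reweighting (sketch).

    logprobs, old_logprobs: per-token log-probs under the current/old policy.
    advantages: per-token (e.g. group-normalized) advantages.
    perception_mask, reasoning_mask: 1.0 for "critical" tokens of each type,
        0.0 otherwise (assumed given here; the paper identifies them dynamically).
    w_perc, w_reas: hypothetical up-weights applied to critical tokens.
    """
    total_loss, total_w = 0.0, 0.0
    for lp, olp, adv, pm, rm in zip(logprobs, old_logprobs, advantages,
                                    perception_mask, reasoning_mask):
        # Default weight 1.0, boosted for critical perception/reasoning tokens.
        w = 1.0 + (w_perc - 1.0) * pm + (w_reas - 1.0) * rm
        ratio = math.exp(lp - olp)                     # importance ratio
        unclipped = ratio * adv
        clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps) * adv
        per_token = -min(unclipped, clipped)           # PPO-style clipped term
        total_loss += w * per_token
        total_w += w
    return total_loss / total_w                        # weight-normalized mean
```

With both masks all-zero this reduces to the plain clipped surrogate, which is what makes the strategy plug-and-play: the base algorithm is unchanged and only the per-token averaging shifts mass toward critical tokens.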
Problem

Research questions and friction points this paper is trying to address.

multimodal large language models
Reinforcement Learning with Verifiable Rewards
perception-reasoning coupling
token-level optimization
visual grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token Reweighting
Multimodal LLMs
Reinforcement Learning with Verifiable Rewards
Visual Grounding
Symbolic Reasoning