Wan-R1: Verifiable Reinforcement Learning for Video Reasoning

📅 2026-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video generation models exhibit limited capabilities on spatial reasoning and multi-step planning tasks, while reinforcement learning approaches are often constrained by reward-function design. This work introduces Group Relative Policy Optimization (GRPO) into flow-based video generation models and proposes two novel reward mechanisms: a multi-component trajectory reward tailored to structured game environments and an embedding-level verifiable reward designed for robotic navigation. The study systematically demonstrates, for the first time, the critical role of verifiable rewards in stabilizing training dynamics. Experimental results show that the proposed method significantly improves generalization in video-based reasoning, with absolute gains in exact-match accuracy over supervised fine-tuning baselines of 29.1% on 3D maze-solving and 51.4% on trap-avoidance tasks.
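The summary does not spell out the GRPO adaptation, but its core idea, scoring each sampled video rollout against its own group of siblings rather than a learned value critic, can be sketched in a few lines. This is a minimal PyTorch sketch; the function name, tensor shapes, and example rewards are illustrative assumptions, not taken from the paper.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Group-relative advantages: each rollout's reward is normalized
    against the other rollouts sampled from the same prompt, so no
    value critic is needed.

    rewards: (num_prompts, group_size) verifiable reward per rollout.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: one maze prompt with a group of 4 sampled video rollouts.
rewards = torch.tensor([[1.0, 0.2, 0.7, 0.1]])
print(grpo_advantages(rewards))
# Above-mean rollouts get positive advantage; below-mean, negative.
```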
📝 Abstract
Video generation models produce visually coherent content but struggle with tasks requiring spatial reasoning and multi-step planning. Reinforcement learning (RL) offers a path to improve generalization, but its effectiveness in video reasoning hinges on reward design -- a challenge that has received little systematic study. We investigate this problem by adapting Group Relative Policy Optimization (GRPO) to flow-based video models and training them on maze-solving and robotic navigation tasks. We first show that multimodal reward models fail catastrophically in this setting. To address this, we design verifiable reward functions grounded in objective task metrics. For structured game environments, we introduce a multi-component trajectory reward. For robotic navigation, we propose an embedding-level verifiable reward. Our experiments show that RL fine-tuning with verifiable rewards improves generalization. For example, on complex 3D mazes, our model improves exact match accuracy by 29.1% over the SFT baseline, and on trap-avoidance tasks by 51.4%. Our systematic reward analysis reveals that verifiable rewards are critical for stable training, while multimodal reward models could lead to degenerate solutions. These findings establish verifiable reward design as a key enabler for robust video reasoning. Code will be publicly available.
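The abstract names two verifiable rewards without giving formulas. Under the stated design, a plausible minimal sketch is a weighted trajectory score for maze environments and a cosine-similarity score for navigation; all component names, weights, and shapes below are assumptions for illustration, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def trajectory_reward(pred_path, gold_path, valid_moves, w=(0.5, 0.3, 0.2)):
    """Hypothetical multi-component trajectory reward for a maze rollout:
    exact match + per-step overlap + move validity, with assumed weights."""
    exact = float(pred_path == gold_path)
    overlap = sum(p == g for p, g in zip(pred_path, gold_path)) / max(len(gold_path), 1)
    validity = sum(m in valid_moves for m in pred_path) / max(len(pred_path), 1)
    return w[0] * exact + w[1] * overlap + w[2] * validity

def embedding_reward(gen_emb: torch.Tensor, ref_emb: torch.Tensor) -> torch.Tensor:
    """Hypothetical embedding-level reward: mean cosine similarity between
    embeddings of generated frames and reference navigation frames."""
    return F.cosine_similarity(gen_emb, ref_emb, dim=-1).mean()

# Example usage with toy inputs.
print(trajectory_reward(["U", "R", "R"], ["U", "R", "R"], {"U", "D", "L", "R"}))  # 1.0
print(embedding_reward(torch.randn(16, 512), torch.randn(16, 512)))
```

Both scores are objective functions of the rollout, which is what makes them "verifiable" in the abstract's sense, in contrast to a learned multimodal reward model.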
Problem

Research questions and friction points this paper is trying to address.

video reasoning
reinforcement learning
reward design
spatial reasoning
multi-step planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

verifiable reward
reinforcement learning
video reasoning
GRPO
reward design