Reasoning via Video: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether video generation models possess spatial planning and multi-step reasoning capabilities, focusing on maze-solving tasks. Method: We introduce VR-Bench, the first benchmark for video-based reasoning evaluation, comprising 7,920 procedurally generated, multi-style maze videos spanning five categories of complex spatial structures. Our evaluation framework integrates supervised fine-tuning, diverse test-time sampling strategies, and multi-scale maze design to systematically assess spatiotemporal reasoning fidelity. Results: Video generation models significantly outperform state-of-the-art vision-language models; diversity in test-time sampling improves performance by 10–20%; models exhibit strong generalization and consistent robustness across complexity levels. This study provides the first systematic empirical validation that video generation serves as an effective implicit mechanism for spatial reasoning, establishing a novel paradigm for evaluating cognitive capabilities in video foundation models.
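
The 7,920 mazes are procedurally generated; the paper's generation pipeline is not reproduced here, but a minimal sketch of one standard approach (recursive-backtracker carving on a grid, with the function name and passage-dictionary representation chosen purely for illustration) conveys the idea:

```python
import random

def generate_maze(width, height, seed=None):
    """Carve a perfect maze on a width x height cell grid with a
    recursive-backtracker (depth-first) walk. Returns a dict mapping
    each cell to the set of neighbouring cells it is connected to."""
    rng = random.Random(seed)
    passages = {(x, y): set() for x in range(width) for y in range(height)}
    stack = [(0, 0)]
    visited = {(0, 0)}
    while stack:
        x, y = stack[-1]
        # Unvisited orthogonal neighbours of the current cell.
        neighbours = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < width and 0 <= y + dy < height
                      and (x + dx, y + dy) not in visited]
        if neighbours:
            nxt = rng.choice(neighbours)
            passages[(x, y)].add(nxt)   # open a passage both ways
            passages[nxt].add((x, y))
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()                 # dead end: backtrack
    return passages

if __name__ == "__main__":
    maze = generate_maze(8, 8, seed=0)
    print(len(maze), "cells carved")    # 64
```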

📝 Abstract
Video models have achieved remarkable success in high-fidelity video generation with coherent motion dynamics. Analogous to the progression from text generation to text-based reasoning in language modeling, the development of video models motivates us to ask: Can video models reason via video generation? Compared with a discrete text corpus, video grounds reasoning in explicit spatial layouts and temporal continuity, making it an ideal substrate for spatial reasoning. In this work, we explore the reasoning-via-video paradigm and introduce VR-Bench -- a comprehensive benchmark designed to systematically evaluate video models' reasoning capabilities. Grounded in maze-solving tasks that inherently require spatial planning and multi-step reasoning, VR-Bench contains 7,920 procedurally generated videos across five maze types and diverse visual styles. Our empirical analysis demonstrates that SFT can efficiently elicit the reasoning ability of video models. Video models exhibit stronger spatial perception during reasoning, outperforming leading VLMs and generalizing well across diverse scenarios, tasks, and levels of complexity. We further discover a test-time scaling effect, where diverse sampling during inference improves reasoning reliability by 10--20%. These findings highlight the unique potential and scalability of reasoning via video for spatial reasoning tasks.
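
The test-time scaling effect described above amounts to a best-of-N scheme: sample several candidate videos per maze and count the maze as solved if any candidate passes a verifier. A minimal sketch, assuming hypothetical `generate_video` and `is_valid_solution` callables that stand in for the model call and the path checker (neither is an interface from the paper):

```python
def solve_rate_best_of_n(mazes, generate_video, is_valid_solution, n_samples=8):
    """Estimate the fraction of mazes solved when each maze gets
    n_samples independently sampled candidate videos and succeeds
    if at least one candidate is a valid solution."""
    solved = 0
    for maze in mazes:
        candidates = (generate_video(maze) for _ in range(n_samples))
        if any(is_valid_solution(maze, video) for video in candidates):
            solved += 1
    return solved / len(mazes)
```

Raising `n_samples` trades inference compute for reliability, which is the scaling knob the abstract's 10--20% improvement refers to.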
Problem

Research questions and friction points this paper is trying to address.

Evaluating video models' reasoning abilities through maze-solving tasks (a sketch of the path-validity check such an evaluation requires follows this list)
Developing VR-Bench, a benchmark for assessing spatial planning
Investigating whether video generation supports spatial reasoning and generalization
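
As referenced in the first item above, evaluating maze-solving from generated video ultimately reduces to checking whether the trajectory traced by the agent is a legal path. A minimal sketch of such a check, assuming the passage-dictionary maze representation from the earlier generation sketch rather than the paper's actual evaluation code:

```python
def is_valid_solution(passages, path, start, goal):
    """Check whether a traced sequence of grid cells (e.g. agent
    positions extracted frame-by-frame from a generated video) is a
    legal maze solution: it starts at `start`, ends at `goal`, and every
    step either stays in place or crosses an open passage."""
    if not path or path[0] != start or path[-1] != goal:
        return False
    for a, b in zip(path, path[1:]):
        if a != b and b not in passages.get(a, set()):
            return False   # step through a wall or off the grid
    return True
```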
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating video models' reasoning via maze-solving tasks
Using SFT to elicit spatial reasoning in video generation models (a sketch of deriving ground-truth solution paths for such supervision follows this list)
Employing test-time scaling to improve reasoning reliability
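
As referenced in the SFT item above, supervised fine-tuning of this kind needs ground-truth solution trajectories to render into target videos. A minimal sketch of deriving such a trajectory by breadth-first search over the carved maze; this is an assumption about the data pipeline, not the paper's implementation:

```python
from collections import deque

def shortest_path(passages, start, goal):
    """Breadth-first search over the carved passage graph; returns the
    shortest cell sequence from start to goal, which could serve as the
    ground-truth trajectory rendered into an SFT target video."""
    queue = deque([start])
    parent = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for nxt in passages[cell]:
            if nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None  # unreachable (cannot happen in a perfect maze)
```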