Structured Over Scale: Learning Spatial Reasoning from Educational Video

📅 2026-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language models perform well on standard video understanding benchmarks but exhibit systematic deficiencies in fundamental reasoning tasks such as counting, spatial reasoning, and compositional understanding. This work proposes leveraging the inherent instructional structure of educational videos—specifically the context–question–pause–answer pattern—as a supervisory signal, revealing for the first time the critical role of content structure in enhancing models’ basic reasoning capabilities. Using the automatically constructed DoraVQA dataset and fine-tuning Qwen2/Qwen3 models with Group Relative Policy Optimization (GRPO), the approach achieves substantial gains in generalization with only a small number of highly structured videos. It yields 8–14 percentage point improvements on DoraVQA, attains state-of-the-art performance on CVBench (86.16%), and demonstrates strong transferability on Video-MME and NExT-QA.

📝 Abstract
Vision-language models (VLMs) demonstrate impressive performance on standard video understanding benchmarks yet fail systematically on simple reasoning tasks that preschool children can solve, including counting, spatial reasoning, and compositional understanding. We hypothesize that the pedagogically-structured content of educational videos provides an ideal training signal for improving these capabilities. We introduce DoraVQA, a dataset of 5,344 question-answer pairs automatically extracted from 8 seasons of Dora the Explorer with precise timestamp alignment. Each episode follows a consistent *context-question-pause-answer* structure that creates a self-contained learning environment analogous to interactive tutoring. We fine-tune both Qwen2 and Qwen3 using Group Relative Policy Optimization (GRPO), leveraging the clear correctness signals and structured reasoning traces inherent in educational content. Despite training exclusively on 38 hours of children's educational videos, our approach achieves improvements of 8-14 points on DoraVQA and state-of-the-art 86.16% on CVBench, with strong transfer to Video-MME and NExT-QA, demonstrating effective generalization from narrow pedagogical content to broad multimodal understanding. Through cross-domain benchmarks, we show that VLMs can perform tasks that require robust reasoning learned from structured educational content, suggesting that content structure matters as much as content scale.
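The abstract's key training mechanism is GRPO, which normalizes each sampled response's reward against the statistics of its own group of samples rather than a learned value function. The sketch below is a minimal, illustrative implementation of that group-relative advantage computation, assuming binary correctness rewards of the kind the paper's question-answer pairs provide; the function name and the fallback for zero-variance groups are assumptions, not the authors' code.

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each sampled response's reward
    by the mean and standard deviation of its own sample group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        # All samples scored identically: no relative signal in this group.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]

# Example: binary correctness rewards for 4 sampled answers to one question.
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
# Correct answers get positive advantage, incorrect ones negative.
```

Because the correctness of each answer in the *context-question-pause-answer* structure is unambiguous, these binary rewards give GRPO a clean signal without a separate reward model.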
Problem

Research questions and friction points this paper is trying to address.

spatial reasoning
vision-language models
compositional understanding
video understanding
reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured educational content
spatial reasoning
vision-language models
DoraVQA
Group Relative Policy Optimization