🤖 AI Summary
This work addresses the lack of systematic investigation into the video reasoning capabilities of current models, a gap caused primarily by the absence of large-scale, structured training and evaluation data. To bridge this gap, we introduce VBVR, a massive video reasoning dataset comprising 200 structured tasks and over one million video clips, organized by a principled task-ontology taxonomy. We further develop VBVR-Bench, a rule-driven, verifiable evaluation framework that enables reproducible benchmarking of video reasoning. Our dataset exceeds existing resources in scale by three orders of magnitude and enables, for the first time, large-scale studies of scaling behavior in video reasoning. These experiments reveal early emergence of generalization to unseen tasks, highlighting promising avenues for future research in video understanding.
📝 Abstract
Rapid progress in video models has largely focused on visual quality, leaving their reasoning capabilities underexplored. Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over spatiotemporal structure such as continuity, interaction, and causality. However, systematic study of video reasoning and its scaling behavior has been hindered by the lack of large-scale training data. To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks organized by a principled taxonomy and over one million video clips, approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities. Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning. The data, benchmark toolkit, and models are publicly available at https://video-reason.com/.