🤖 AI Summary
Current spatial intelligence models are constrained by the scale, diversity, and real-world dynamism of training data—particularly the scarcity of large-scale, in-the-wild video datasets with dense 3D annotations. To address this, we introduce the first ultra-large-scale, temporally continuous dynamic video dataset comprising 7,089 hours of footage, spanning diverse real-world scenes and capturing authentic camera motion. We propose a hierarchical filtering pipeline to select 2.7 million high-quality clips from 21,000 hours of raw video. Furthermore, we design an automated annotation pipeline that generates multimodal, frame-level dense annotations—including camera poses, depth maps, motion masks, and structured motion instructions. This dataset surpasses existing benchmarks in scale, annotation density, and scene diversity. Empirical evaluation demonstrates substantial improvements in the generalization and scalability of video understanding and 3D vision models on realistic, unconstrained scenarios.
📝 Abstract
Significant progress has been made in spatial intelligence, spanning both spatial reconstruction and world exploration. However, the scalability and real-world fidelity of current models remain severely constrained by the scarcity of large-scale, high-quality training data. While several datasets provide camera pose information, they are typically limited in scale, diversity, and annotation richness, particularly for real-world dynamic scenes with ground-truth camera motion. To this end, we collect SpatialVID, a dataset consisting of a large corpus of in-the-wild videos with diverse scenes, camera movements, and dense 3D annotations such as per-frame camera poses, depth, and motion instructions. Specifically, we collect more than 21,000 hours of raw video and process them into 2.7 million clips through a hierarchical filtering pipeline, totaling 7,089 hours of dynamic content. A subsequent annotation pipeline enriches these clips with detailed spatial and semantic information, including camera poses, depth maps, dynamic masks, structured captions, and serialized motion instructions. Analysis of SpatialVID's data statistics reveals a richness and diversity that directly foster improved model generalization and performance, establishing it as a key asset for the video and 3D vision research community.
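To make the annotation layout concrete, the following is a minimal sketch of how a clip's per-frame and per-clip annotations could be organized in code. It reflects only the annotation types named in the abstract (camera poses, depth maps, dynamic masks, structured captions, serialized motion instructions); all field names and file-path conventions are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FrameAnnotation:
    """Per-frame dense annotations (hypothetical field names)."""
    camera_pose: List[float]        # e.g. a 4x4 extrinsic matrix flattened to 16 values
    depth_map_path: str             # path to this frame's depth map
    dynamic_mask_path: str          # path to this frame's mask of moving regions

@dataclass
class ClipAnnotation:
    """Per-clip spatial and semantic annotations (hypothetical field names)."""
    clip_id: str
    caption: str                    # structured scene caption
    motion_instructions: List[str]  # serialized camera-motion instructions
    frames: List[FrameAnnotation] = field(default_factory=list)

# Build one clip with a single annotated frame as a usage example.
clip = ClipAnnotation(
    clip_id="clip_000001",
    caption="A street scene with moving pedestrians",
    motion_instructions=["pan_left", "move_forward"],
)
clip.frames.append(FrameAnnotation(
    camera_pose=[1.0, 0.0, 0.0, 0.0] * 4,
    depth_map_path="depth/000000.png",
    dynamic_mask_path="mask/000000.png",
))
print(clip.clip_id, len(clip.frames))
```

Grouping frame-level fields separately from clip-level fields mirrors the abstract's distinction between dense per-frame annotations and clip-wide semantic information.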