🤖 AI Summary
This work addresses the scarcity and high cost of annotated 3D scene data by proposing an automated data engine that leverages the vast, untapped resource of unlabeled internet videos to generate multi-granularity 3D training data. The approach integrates 3D reconstruction, vision–language alignment, and end-to-end training to enable joint learning from both human-annotated and synthetically generated data. It identifies key bottlenecks in unsupervised 3D data generation and demonstrates, for the first time, the effectiveness of web-scale videos across a broad spectrum of tasks—ranging from low-level perception (e.g., 3D object detection and instance segmentation) to high-level semantic reasoning (e.g., spatial visual question answering and vision-and-language navigation). The resulting models exhibit strong zero-shot performance and achieve further gains after fine-tuning.
📝 Abstract
Annotated 3D scene data is scarce and expensive to acquire, while abundant unlabeled videos are readily available on the internet. In this paper, we demonstrate that carefully designed data engines can leverage web-curated, unlabeled videos to automatically generate training data that supports end-to-end 3D scene understanding models alongside human-annotated datasets. We identify and analyze bottlenecks in automated data generation, revealing critical factors that determine the efficiency and effectiveness of learning from unlabeled data. To validate our approach across different perception granularities, we evaluate on three tasks, ranging from low-level perception, i.e., 3D object detection and instance segmentation, to high-level reasoning, i.e., 3D spatial Visual Question Answering (VQA) and Vision-Language Navigation (VLN). Models trained on our generated data demonstrate strong zero-shot performance and improve further after fine-tuning. This demonstrates the viability of leveraging readily available web data as a path toward more capable scene understanding systems.
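
The abstract describes the data engine only at a high level. The sketch below illustrates one plausible way such a pipeline could be organized (unlabeled web videos → 3D reconstruction → vision-language pseudo-labeling → joint training with annotated data); it is not the authors' implementation, and every name in it (`Sample3D`, `reconstruct_scene`, `pseudo_label`, `build_web_samples`, `web_ratio`) is a hypothetical placeholder.

```python
# Illustrative sketch only: a minimal data-engine loop that turns unlabeled web
# videos into pseudo-labeled 3D training samples and mixes them with
# human-annotated data. All functions below are hypothetical stubs.

from dataclasses import dataclass
from typing import List


@dataclass
class Sample3D:
    """One training example: reconstructed 3D scene plus (pseudo-)labels."""
    point_cloud: list   # reconstructed scene geometry (placeholder)
    labels: dict        # boxes / masks / QA pairs, depending on the task
    source: str         # "annotated" or "web"


def reconstruct_scene(video_path: str) -> list:
    """Stub for 3D reconstruction (e.g., SfM / multi-view stereo) of a web video."""
    return []           # a real pipeline would return points or meshes


def pseudo_label(point_cloud: list, video_path: str) -> dict:
    """Stub for vision-language alignment: lift 2D detections/captions into 3D labels."""
    return {"boxes": [], "masks": [], "qa_pairs": []}


def build_web_samples(video_paths: List[str]) -> List[Sample3D]:
    """Data engine: unlabeled videos -> reconstructed scenes -> pseudo-labeled samples."""
    samples = []
    for path in video_paths:
        cloud = reconstruct_scene(path)
        labels = pseudo_label(cloud, path)
        samples.append(Sample3D(point_cloud=cloud, labels=labels, source="web"))
    return samples


def train(model, annotated: List[Sample3D], web: List[Sample3D], web_ratio: float = 0.5):
    """Joint training sketch: mix human-annotated and web-generated samples."""
    mixed = annotated + web   # in practice, sampled per batch according to web_ratio
    for sample in mixed:
        pass                  # forward/backward pass would go here
    return model


if __name__ == "__main__":
    web_samples = build_web_samples(["video_0001.mp4"])
    model = train(model=None, annotated=[], web=web_samples)
```

In this reading, the "bottlenecks in automated data generation" mentioned above would correspond to the reconstruction and pseudo-labeling stages, whose quality bounds how useful the web-derived samples are during joint training.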