You See it, You Got it: Learning 3D Creation on Pose-Free Videos at Scale

📅 2024-12-09
🏛️ arXiv.org
📈 Citations: 7
✨ Influential: 1
🤖 AI Summary
This work addresses the problem of learning generalizable 3D priors from large-scale, pose-unlabeled web videos for open-world, vision-driven 3D generation. Methodologically: (1) it replaces conventional camera pose inputs with purely 2D visual conditioning signals; (2) it introduces WebVi3D, the first large-scale, video-derived multi-view dataset, containing 320M frames; and (3) it proposes See3D, a vision-conditioned 3D diffusion model incorporating temporal-aware noise masking, deformation-guided 3D synthesis, and geometric warping. The key contribution is the first fully self-supervised, large-scale 3D modeling framework that requires neither camera poses nor 3D ground-truth supervision. Empirically, See3D achieves zero-shot state-of-the-art performance on single-image and sparse-view reconstruction tasks, significantly outperforming existing methods reliant on costly 3D annotations. This demonstrates the effectiveness, generalizability, and scalability of video-based self-supervised learning for 3D prior acquisition.

πŸ“ Abstract
Recent 3D generation models typically rely on limited-scale 3D 'gold labels' or 2D diffusion priors for 3D content creation. However, their performance is upper-bounded by constrained 3D priors due to the lack of scalable learning paradigms. In this work, we present See3D, a visual-conditional multi-view diffusion model trained on large-scale Internet videos for open-world 3D creation. The model aims to Get 3D knowledge by solely Seeing the visual contents from the vast and rapidly growing video data: You See it, You Got it. To achieve this, we first scale up the training data using a proposed data curation pipeline that automatically filters out multi-view inconsistencies and insufficient observations from source videos. This results in a high-quality, richly diverse, large-scale dataset of multi-view images, termed WebVi3D, containing 320M frames from 16M video clips. Nevertheless, learning generic 3D priors from videos without explicit 3D geometry or camera pose annotations is nontrivial, and annotating poses for web-scale videos is prohibitively expensive. To eliminate the need for pose conditions, we introduce an innovative visual condition: a purely 2D-inductive visual signal generated by adding time-dependent noise to the masked video data. Finally, we introduce a novel visual-conditional 3D generation framework by integrating See3D into a warping-based pipeline for high-fidelity 3D generation. Our numerical and visual comparisons on single and sparse reconstruction benchmarks show that See3D, trained on cost-effective and scalable video data, achieves notable zero-shot and open-world generation capabilities, markedly outperforming models trained on costly and constrained 3D datasets. Please refer to our project page at: https://vision.baai.ac.cn/see3d
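The abstract's central trick is the visual condition: masking video frames and mixing in noise whose strength depends on the diffusion timestep, so the model is conditioned on 2D appearance alone rather than camera poses. A minimal sketch of that idea (the function name, interface, and noise schedule here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def visual_condition(frames, t, T=1000, mask_ratio=0.5, rng=None):
    """Sketch of a 2D-inductive visual condition (hypothetical interface).

    frames: (N, H, W, C) float array in [0, 1], one clip of video frames.
    t:      diffusion timestep; corruption grows with t.
    Pixels are randomly masked, then time-dependent Gaussian noise is
    mixed in. The result carries appearance cues from the video but no
    explicit camera-pose information, which is the point of the design.
    """
    rng = rng or np.random.default_rng(0)
    alpha = 1.0 - t / T                                 # signal kept at timestep t
    mask = rng.random(frames.shape[:3]) > mask_ratio    # (N, H, W) keep-mask
    masked = frames * mask[..., None]                   # zero out masked pixels
    noise = rng.standard_normal(frames.shape)
    return alpha * masked + (1.0 - alpha) * noise       # noisy visual condition
```

At `t = 0` the condition is just the masked clip; as `t` approaches `T` it degrades toward pure noise, mirroring how a diffusion model's conditioning signal is corrupted along the forward process.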
Problem

Research questions and friction points this paper is trying to address.

Learning 3D creation from pose-free videos at scale
Overcoming limitations of constrained 3D priors in generation models
Eliminating need for expensive camera pose annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale video training for 3D creation
Pose-free learning with 2D-inductive signals
Warping-based pipeline for high-fidelity generation
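The warping step in the pipeline above can be illustrated with a toy forward-warp: project a source image into a novel view using per-pixel depth and a relative camera pose. This is a sketch under assumed pinhole-camera conventions, not See3D's actual implementation; the disocclusion holes it leaves are what a multi-view diffusion model would then fill in.

```python
import numpy as np

def warp_to_novel_view(img, depth, K, R, t):
    """Forward-warp img (H, W, C) into a target view given per-pixel
    depth, intrinsics K, and relative pose (R, t). Nearest-pixel scatter,
    no z-buffer: overlapping points simply overwrite, which is fine for
    a sketch but not for production use.
    """
    H, W, C = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # (3, HW)
    # Unproject to 3D camera coordinates, then move to the target camera.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts = R @ pts + t.reshape(3, 1)
    proj = K @ pts
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(img)
    valid = (proj[2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out[v[valid], u[valid]] = img.reshape(-1, C)[valid]  # scatter visible pixels
    return out
```

With an identity pose and unit depth the warp is a no-op, which makes the geometry easy to sanity-check before trying real camera motions.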
🔎 Similar Papers