Stereo4D: Learning How Things Move in 3D from Internet Stereo Videos

📅 2024-12-12
🏛️ arXiv.org
📈 Citations: 7
Influential: 0
🤖 AI Summary
Reconstructing dynamic 3D scenes from Internet-sourced stereo video is challenging because ground-truth motion annotations are unavailable. Method: The authors build a framework for mining large-scale 4D (3D + time) reconstructions without any motion ground truth. The system fuses camera pose estimation, stereo depth prediction, and long-range point tracking, then filters the results with geometric and motion consistency checks and temporal smoothing, yielding pseudo-ground-truth motion supervision. A variant of DUSt3R is then trained on this data for joint structure-and-motion prediction. Contributions/Results: (1) an automatically constructed, world-consistent, pseudo-metric dataset of long-term 4D point clouds and motion trajectories; (2) zero-shot joint prediction of 3D structure and motion from real-world image pairs, generalizing to diverse scenes without motion supervision or scene-specific fine-tuning and significantly improving over prior methods.
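As a rough illustration of the geometry underlying this pipeline (not the paper's actual code), the core step of fusing stereo depth with camera pose is unprojection: each pixel's depth is lifted along its camera ray and transformed into a shared world frame. The function and variable names below are hypothetical.

```python
import numpy as np

def unproject_to_world(depth, K, cam_to_world):
    """Lift a per-pixel depth map into a world-frame point cloud.

    depth        : (H, W) metric (or pseudo-metric) depth per pixel
    K            : (3, 3) camera intrinsics
    cam_to_world : (4, 4) camera-to-world pose
    Returns (H*W, 3) world-space points.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T           # camera-frame rays at unit depth
    pts_cam = rays * depth.reshape(-1, 1)     # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]    # move into the shared world frame
```

Because every frame's points land in the same world frame, per-frame clouds can be accumulated over time into the world-consistent 4D reconstructions the summary describes.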

📝 Abstract
Learning to understand dynamic 3D scenes from imagery is crucial for applications ranging from robotics to scene reconstruction. Yet, unlike other problems where large-scale supervised training has enabled rapid progress, directly supervising methods for recovering 3D motion remains challenging due to the fundamental difficulty of obtaining ground truth annotations. We present a system for mining high-quality 4D reconstructions from internet stereoscopic, wide-angle videos. Our system fuses and filters the outputs of camera pose estimation, stereo depth estimation, and temporal tracking methods into high-quality dynamic 3D reconstructions. We use this method to generate large-scale data in the form of world-consistent, pseudo-metric 3D point clouds with long-term motion trajectories. We demonstrate the utility of this data by training a variant of DUSt3R to predict structure and 3D motion from real-world image pairs, showing that training on our reconstructed data enables generalization to diverse real-world scenes. Project page and data at: https://stereo4d.github.io
Problem

Research questions and friction points this paper is trying to address.

Recovering 3D motion from imagery lacks ground-truth annotations
Mining high-quality 4D reconstructions from internet stereo videos
Generating world-consistent, pseudo-metric 3D point clouds with long-term motion trajectories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses camera pose estimation, stereo depth, and temporal tracking into dynamic 3D reconstructions
Generates world-consistent, pseudo-metric 3D point clouds with long-term trajectories
Trains a DUSt3R variant to predict structure and 3D motion from image pairs
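The innovations above hinge on combining the three fused signals: a 2D point track, per-frame depth at the tracked pixel, and per-frame camera pose together determine a world-space 3D trajectory, so camera motion is factored out and only scene motion remains. A minimal sketch of that lifting step, with hypothetical names and a shared intrinsics matrix assumed:

```python
import numpy as np

def lift_track(track_uv, depths, K, poses):
    """Lift a 2D point track into a world-space 3D trajectory.

    track_uv : (T, 2) pixel coordinates of one tracked point over T frames
    depths   : (T,)   depth sampled at the track location in each frame
    K        : (3, 3) shared camera intrinsics
    poses    : (T, 4, 4) per-frame camera-to-world transforms
    Returns (T, 3); a static scene point yields a constant trajectory,
    so any residual motion is scene motion rather than camera motion.
    """
    K_inv = np.linalg.inv(K)
    traj = []
    for (u, v), d, T_cw in zip(track_uv, depths, poses):
        ray = K_inv @ np.array([u, v, 1.0])   # camera-frame ray at unit depth
        p_cam = np.append(ray * d, 1.0)       # scale by depth, homogenize
        traj.append((T_cw @ p_cam)[:3])       # transform into the world frame
    return np.stack(traj)
```

Trajectories like these, mined at scale and filtered for consistency, are what the paper uses as pseudo-ground-truth supervision for the DUSt3R variant.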