🤖 AI Summary
Multi-view 3D reconstruction is fundamentally constrained by the limited scale and diversity of real-world training data. To address this, we propose Puzzles, a self-supervised data augmentation framework that generates unlimited high-fidelity posed video-depth sequences from a single image or short video clip, requiring no additional annotations. Puzzles simulates diverse camera trajectories via geometrically consistent image transformations, then jointly performs novel-view synthesis and depth estimation to produce pose-accurate, geometry-consistent synthetic video sequences. Crucially, it enables the construction of large-scale, highly diverse training sets from only a small fraction (e.g., 10%) of the real data, without modifying existing reconstruction architectures, so it remains fully compatible with state-of-the-art frameworks such as DUSt3R. Experiments demonstrate that models trained with Puzzles match the reconstruction accuracy of those trained on the full dataset, while achieving substantial gains in generalization and training efficiency.
📝 Abstract
Multi-view 3D reconstruction remains a core challenge in computer vision. Recent methods, such as DUSt3R and its successors, directly regress pointmaps from image pairs without relying on known scene geometry or camera parameters. However, the performance of these models is constrained by the diversity and scale of available training data. In this work, we introduce Puzzles, a data augmentation strategy that synthesizes an unbounded volume of high-quality posed video-depth data from a single image or video clip. By simulating diverse camera trajectories and realistic scene geometry through targeted image transformations, Puzzles significantly enhances data variety. Extensive experiments show that integrating Puzzles into existing video-based 3D reconstruction pipelines consistently boosts performance without modifying the underlying network architecture. Notably, models trained on only ten percent of the original data, augmented with Puzzles, still achieve accuracy comparable to those trained on the full dataset. Code is available at https://jiahao-ma.github.io/puzzles/.
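To make the camera-trajectory simulation concrete, here is a minimal, hypothetical sketch of the core idea: sliding overlapping crop windows across a single image-depth pair to mimic a translating camera, yielding a pseudo video-depth clip. The function name and parameters are illustrative assumptions, not the paper's actual API; the real Puzzles pipeline additionally applies geometrically consistent warps and novel-view synthesis to keep poses and geometry accurate.

```python
import numpy as np

def simulate_trajectory(image, depth, num_frames=5, crop_frac=0.6, rng=None):
    """Toy crop-based camera-trajectory simulation (illustrative only).

    Interpolates a crop window between two random positions so that
    consecutive crops overlap, producing a short pseudo video-depth
    sequence from one image-depth pair.
    """
    rng = rng or np.random.default_rng(0)
    H, W = image.shape[:2]
    ch, cw = int(H * crop_frac), int(W * crop_frac)
    # Random start/end crop origins define a linear "camera path".
    start = rng.integers([0, 0], [H - ch + 1, W - cw + 1])
    end = rng.integers([0, 0], [H - ch + 1, W - cw + 1])
    frames = []
    for t in np.linspace(0.0, 1.0, num_frames):
        # Linearly interpolate the crop origin along the path.
        y, x = np.round((1 - t) * start + t * end).astype(int)
        frames.append((image[y:y + ch, x:x + cw].copy(),
                       depth[y:y + ch, x:x + cw].copy()))
    return frames
```

Each returned frame pair shares most of its content with its neighbors, which is what gives downstream pairwise reconstruction models like DUSt3R usable multi-view supervision.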