🤖 AI Summary
To address the inefficiency and error accumulation inherent in pairwise matching and iterative alignment for large-scale multi-view 3D reconstruction, this paper proposes the first end-to-end multi-view extension architecture, abandoning DUSt3R's pairwise paradigm and post-hoc global alignment entirely. Methodologically, the authors design a Transformer-based joint encoder for multiple images, incorporating cross-view attention and learnable geometric priors to enable simultaneous feature alignment and joint regression of depth and camera pose for an arbitrary number of input views. The approach achieves state-of-the-art accuracy across multiple benchmarks, reducing pose estimation error by 32% and eliminating cumulative drift. Inference speed also improves by over an order of magnitude: reconstructing a thousand-image scene requires only a single forward pass. Together, these gains substantially enhance the practicality and scalability of large-scale scene reconstruction.
📝 Abstract
Multi-view 3D reconstruction remains a core challenge in computer vision, particularly in applications requiring accurate and scalable representations across diverse perspectives. Current leading methods such as DUSt3R employ a fundamentally pairwise approach, processing images in pairs and necessitating costly global alignment procedures to reconstruct from multiple views. In this work, we propose Fast 3D Reconstruction (Fast3R), a novel multi-view generalization of DUSt3R that achieves efficient and scalable 3D reconstruction by processing many views in parallel. Fast3R's Transformer-based architecture processes N images in a single forward pass, bypassing the need for iterative alignment. Through extensive experiments on camera pose estimation and 3D reconstruction, Fast3R demonstrates state-of-the-art performance, with significant improvements in inference speed and reduced error accumulation. These results establish Fast3R as a robust alternative for multi-view applications, offering enhanced scalability without compromising reconstruction accuracy.
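The core single-forward-pass idea described above can be sketched in a few lines: tokens from all N views are concatenated and fused in one global self-attention step, so every view attends to every other view without pairwise matching or post-hoc alignment. This is an illustrative NumPy toy under stated assumptions, not Fast3R's actual implementation; the random projection weights, token shapes, and function names are all hypothetical stand-ins for the learned model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_multiview_attention(view_tokens, rng=np.random.default_rng(0)):
    """Fuse tokens from all views with one global self-attention pass.

    view_tokens: list of (T, D) arrays, one per view. The views are
    concatenated into a single token sequence so every token can attend
    to tokens from every other view in the same forward pass -- this is
    the property that removes pairwise matching and iterative alignment.
    Weights are random here (hypothetical stand-ins for learned ones).
    """
    n_views, (t, d) = len(view_tokens), view_tokens[0].shape
    x = np.concatenate(view_tokens, axis=0)            # (N*T, D)
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(d))               # cross-view attention
    fused = attn @ v                                   # (N*T, D)
    # Split back into per-view token sets; in the real model these would
    # feed per-view heads regressing depth/pointmaps and camera pose.
    return fused.reshape(n_views, t, d)

# Usage: four views, 16 tokens each, 32-dim features.
views = [np.random.default_rng(i).standard_normal((16, 32)) for i in range(4)]
fused = joint_multiview_attention(views)
# fused.shape == (4, 16, 32)
```

Because the attention is over the concatenated sequence, the same function accepts any number of views, mirroring the paper's claim that arbitrary numbers of input images are handled in one pass.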