🤖 AI Summary
DUSt3R’s pairwise formulation causes the number of image pairs to grow quadratically with the collection size, degrading robustness and speed of global optimization for large-scale multi-view reconstruction. To address this, the authors propose MUSt3R, a Multi-view Network for Stereo 3D Reconstruction that extends DUSt3R from pairs to multiple views. The method: (1) makes the Transformer-based architecture symmetric and generalizes it beyond image pairs; (2) adds a multi-layer memory mechanism that reduces computational complexity and scales reconstruction to large collections, inferring thousands of 3D pointmaps at high frame rates with limited added cost; and (3) directly predicts 3D structure for all views in a common coordinate frame, without camera calibration or pose priors. Evaluated on uncalibrated visual odometry, relative camera pose estimation, focal and scale estimation, 3D reconstruction, and multi-view depth estimation, MUSt3R achieves state-of-the-art performance. It supports both offline and online operation, and so applies seamlessly to Structure-from-Motion (SfM) and visual SLAM (vSLAM) settings.
📝 Abstract
DUSt3R introduced a novel paradigm in geometric computer vision by proposing a model that can provide dense and unconstrained Stereo 3D Reconstruction of arbitrary image collections with no prior information about camera calibration or viewpoint poses. Under the hood, however, DUSt3R processes image pairs, regressing local 3D reconstructions that must then be aligned in a global coordinate system. The number of pairs, which grows quadratically, is an inherent limitation that becomes especially concerning for robust and fast optimization on large image collections. In this paper, we propose an extension of DUSt3R from pairs to multiple views that addresses all of these concerns. First, we propose a Multi-view Network for Stereo 3D Reconstruction, or MUSt3R, that modifies the DUSt3R architecture by making it symmetric and extending it to directly predict 3D structure for all views in a common coordinate frame. Second, we endow the model with a multi-layer memory mechanism that reduces the computational complexity and scales the reconstruction to large collections, inferring thousands of 3D pointmaps at high frame rates with limited added complexity. The framework is designed to perform 3D reconstruction both offline and online, and hence can be seamlessly applied to SfM and visual SLAM scenarios, showing state-of-the-art performance on various 3D downstream tasks, including uncalibrated Visual Odometry, relative camera pose, scale and focal estimation, 3D reconstruction, and multi-view depth estimation.
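To make the scaling argument concrete: a pairwise model such as DUSt3R must process every unordered image pair, so the amount of work grows quadratically with the number of views, whereas a multi-view model with a shared memory can process each view once. A minimal sketch of this count (the helper names here are illustrative, not part of either system):

```python
from itertools import combinations

def num_pairs(n_views: int) -> int:
    # Pairwise processing: one forward pass per unordered image pair,
    # i.e. n * (n - 1) / 2 passes -- quadratic in the collection size.
    return n_views * (n_views - 1) // 2

def num_multiview_passes(n_views: int) -> int:
    # Memory-based multi-view processing (illustrative): each view is
    # ingested once against the shared memory -- linear in the collection.
    return n_views

for n in (10, 100, 1000):
    print(f"{n:>5} views: {num_pairs(n):>7} pairwise passes "
          f"vs {num_multiview_passes(n):>5} multi-view passes")
```

At 1000 views the pairwise count reaches 499,500 passes, versus 1000 for a once-per-view scheme, which is the gap the memory mechanism is designed to close.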