🤖 AI Summary
Real-time, high-quality dense 3D reconstruction from monocular RGB video remains challenging due to the need for accurate camera pose estimation and global geometric consistency.
Method: This paper proposes an end-to-end, pose-free paradigm: video is processed in sliding windows; a feed-forward neural network directly regresses local point clouds; and a differentiable deformation alignment module enables progressive geometric consistency optimization, implicitly unifying the global coordinate system. The approach eliminates explicit camera pose estimation, iterative optimization, and hand-crafted geometric constraints typical in SLAM, instead jointly learning local reconstruction and global registration.
Contribution/Results: Our method achieves state-of-the-art accuracy and completeness on multiple standard benchmarks while running at over 20 FPS, marking the first real-time dense reconstruction framework that operates without explicit pose solving.
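The pipeline above (sliding windows → per-window local pointmaps → progressive alignment into one global frame) can be sketched with a toy example. This is not the paper's method: SLAM3R regresses pointmaps with a feed-forward network and aligns them with a learned deformation module, whereas here regression is simulated by perturbing ground-truth points with an unknown similarity transform, and alignment is stood in for by a classical least-squares similarity fit (Umeyama) on the overlap between consecutive windows. All function names and numbers are illustrative.

```python
import numpy as np

def umeyama(src, dst):
    """Least-squares similarity (s, R, t) mapping src points onto dst (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # reflection guard
    D = np.array([1.0, 1.0, d])
    R = U @ np.diag(D) @ Vt
    s = (S * D).sum() / (sc ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

np.random.seed(0)
scene = np.random.rand(240, 3) * 5.0             # ground-truth geometry (unknown to the system)
# Sliding windows of 120 "frames" with 60-point (50%) overlap.
windows = [scene[i:i + 120] for i in range(0, 240, 60)]

prev_reg = windows[0]                            # first window anchors the global frame
global_pts = [windows[0]]
for w in windows[1:]:
    # Simulated "local pointmap": the window's geometry in an arbitrary local frame.
    A = np.random.randn(3, 3)
    Q, _ = np.linalg.qr(A)
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    local = 1.7 * (w @ Q.T) + np.array([3.0, -1.0, 2.0])
    # Register via the 60 points shared with the previously registered window.
    s, R, t = umeyama(local[:60], prev_reg[-60:])
    reg = s * (local @ R.T) + t
    global_pts.append(reg[60:])                  # append only the new, non-overlapping part
    prev_reg = reg

global_map = np.vstack(global_pts)
print("max registration error:", np.abs(global_map - scene).max())
```

With noiseless overlaps the recovered similarity inverts the unknown local frame exactly, so the stitched map matches the ground truth to numerical precision; the real system handles noisy, learned pointmaps, which is why a trainable deformation alignment is used instead.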
📄 Abstract
In this paper, we introduce SLAM3R, a novel and effective system for real-time, high-quality, dense 3D reconstruction from RGB videos. SLAM3R provides an end-to-end solution by seamlessly integrating local 3D reconstruction and global coordinate registration through feed-forward neural networks. Given an input video, the system first converts it into overlapping clips using a sliding window mechanism. Unlike traditional pose optimization-based methods, SLAM3R directly regresses 3D pointmaps from the RGB images in each window and progressively aligns and deforms these local pointmaps to create a globally consistent scene reconstruction, all without explicitly solving any camera parameters. Experiments across datasets consistently show that SLAM3R achieves state-of-the-art reconstruction accuracy and completeness while maintaining real-time performance at 20+ FPS. Code is available at: https://github.com/PKU-VCL-3DV/SLAM3R.