HI-SLAM2: Geometry-Aware Gaussian SLAM for Fast Monocular Scene Reconstruction

📅 2024-11-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the longstanding challenge in monocular neural SLAM of simultaneously achieving high rendering fidelity and geometric accuracy. We propose a fast, high-precision scene reconstruction method that operates solely on RGB video input. Our approach integrates monocular depth priors with learning-based dense SLAM, adopting 3D Gaussian Splatting (3DGS) as the underlying map representation. To the best of our knowledge, this is the first monocular RGB-only framework enabling concurrent high-fidelity novel-view synthesis and metrically accurate geometric reconstruction. We introduce two key innovations: (i) anchor keyframe-driven Gaussian deformation updates, and (ii) mesh-guided scale alignment, significantly enhancing depth detail preservation and global scale consistency. Furthermore, the system supports real-time pose-graph optimization and dynamic Gaussian refinement under loop closure. Extensive evaluation on Replica, ScanNet, and ScanNet++ demonstrates that our method outperforms existing neural SLAM approaches in both reconstruction accuracy and rendering quality, even surpassing RGB-D-based baselines.
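The summary above mentions instant map updates under loop closure by explicitly deforming 3D Gaussians based on anchored keyframe updates. A minimal sketch of that idea, assuming each Gaussian is rigidly tied to one anchor keyframe: when pose-graph optimization revises the keyframe's pose, the relative SE(3) correction is applied to the Gaussian means and orientations (function names and the anchoring scheme are illustrative, not the paper's implementation):

```python
import numpy as np

def se3_inv(T):
    """Invert a 4x4 rigid-body transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def deform_gaussians(means, rotations, T_old, T_new):
    """Rigidly re-anchor Gaussians when their keyframe pose changes.

    means:     (N, 3) Gaussian centers in the world frame
    rotations: (N, 3, 3) Gaussian orientation matrices in the world frame
    T_old, T_new: 4x4 camera-to-world poses of the anchor keyframe,
                  before and after pose-graph optimization
    """
    # Relative correction mapping the old world placement to the new one.
    T_rel = T_new @ se3_inv(T_old)
    R_rel, t_rel = T_rel[:3, :3], T_rel[:3, 3]
    new_means = means @ R_rel.T + t_rel
    new_rotations = np.einsum('ij,njk->nik', R_rel, rotations)
    return new_means, new_rotations
```

Because the correction is a closed-form rigid transform per keyframe, the map can be updated on the fly without re-optimizing the Gaussians themselves.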

📝 Abstract
We present HI-SLAM2, a geometry-aware Gaussian SLAM system that achieves fast and accurate monocular scene reconstruction using only RGB input. Existing Neural SLAM or 3DGS-based SLAM methods often trade off between rendering quality and geometry accuracy; our research demonstrates that both can be achieved simultaneously with RGB input alone. The key idea of our approach is to strengthen geometry estimation by combining easy-to-obtain monocular priors with learning-based dense SLAM, and then using 3D Gaussian splatting as our core map representation to efficiently model the scene. Upon loop closure, our method ensures on-the-fly global consistency through efficient pose graph bundle adjustment and instant map updates by explicitly deforming the 3D Gaussian units based on anchored keyframe updates. Furthermore, we introduce a grid-based scale alignment strategy that improves the scale consistency of prior depths and recovers finer depth details. Through extensive experiments on Replica, ScanNet, and ScanNet++, we demonstrate significant improvements over existing Neural SLAM methods, even surpassing RGB-D-based methods in both reconstruction and rendering quality. The project page and source code will be made available at https://hi-slam2.github.io/.
Problem

Research questions and friction points this paper is trying to address.

How to achieve fast monocular scene reconstruction from RGB-only input
How to attain high rendering quality and geometric accuracy simultaneously
How to maintain global consistency under loop closure via efficient pose graph adjustment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines monocular priors with learning-based SLAM
Uses 3D Gaussian splatting for scene representation
Ensures global consistency via pose graph adjustment
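One of the contributions listed above, the grid-based scale alignment from the abstract, can be sketched as a per-cell least-squares fit of the relative monocular prior depth against the metric depth estimated by dense SLAM. This is a minimal illustration under assumed inputs; the function name, the `grid` parameter, and the single-scale (no shift) model are hypothetical simplifications, not the paper's exact formulation:

```python
import numpy as np

def grid_scale_align(prior_depth, slam_depth, valid, grid=8):
    """Align a monocular prior depth map to SLAM depth, cell by cell.

    prior_depth: (H, W) relative depth from a monocular network
    slam_depth:  (H, W) metric depth from dense SLAM (possibly sparse)
    valid:       (H, W) bool mask where slam_depth is trusted
    grid:        cells per image side (hypothetical parameter)
    Returns the prior depth rescaled per grid cell.
    """
    H, W = prior_depth.shape
    out = prior_depth.copy()
    hs, ws = int(np.ceil(H / grid)), int(np.ceil(W / grid))
    for i in range(0, H, hs):
        for j in range(0, W, ws):
            p = prior_depth[i:i + hs, j:j + ws]
            s = slam_depth[i:i + hs, j:j + ws]
            v = valid[i:i + hs, j:j + ws]
            if v.sum() < 4:  # too few observations: keep the prior as-is
                continue
            # Closed-form 1-parameter least squares: s ≈ scale * p
            scale = (p[v] * s[v]).sum() / max((p[v] ** 2).sum(), 1e-8)
            out[i:i + hs, j:j + ws] = p * scale
    return out
```

Fitting the scale locally rather than globally lets the prior keep its fine depth detail while each region is pulled to a metrically consistent scale.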
👥 Authors
Wei Zhang
Institute for Photogrammetry and Geoinformatics, University of Stuttgart, Germany
Qing Cheng
Technical University of Munich, Germany
D. Skuddis
Institute for Photogrammetry and Geoinformatics, University of Stuttgart, Germany
Niclas Zeller
Karlsruhe University of Applied Sciences
Computer Vision · SLAM · 3D Reconstruction · Light Field Imaging
Daniel Cremers
Technical University of Munich
Computer Vision · Machine Learning · Optimization · Robotics
Norbert Haala
Institute for Photogrammetry and Geoinformatics, University of Stuttgart, Germany