🤖 AI Summary
RGB-D SLAM faces critical bottlenecks in large-scale scenarios: GPU memory constraints and the poor scalability of 3D Gaussian representations. To address these, we propose view-tied 3D Gaussians, a simplified representation in which Gaussians are anchored directly to depth-image pixels so that their locations, rotations, and multi-dimensional variances need not be learned, enabling lightweight, localized modeling. Our method integrates differentiable splatting rendering, incremental Gaussian management, and joint RGB-D optimization to improve memory efficiency and geometric-photometric consistency. Evaluated on standard benchmarks, our approach surpasses state-of-the-art methods in rendering quality and tracking accuracy, robustly handles long sequences, and scales dense reconstruction to extremely large scenes while delivering high-precision pose estimation.
📝 Abstract
Jointly estimating camera poses and mapping scenes from RGB-D images is a fundamental task in simultaneous localization and mapping (SLAM). State-of-the-art methods represent a scene with 3D Gaussians and render them through splatting for higher efficiency and better rendering quality. However, these methods cannot scale up to extremely large scenes, because their tracking and mapping strategies must keep all 3D Gaussians optimizable in limited GPU memory throughout training to maintain geometric and color consistency with previous RGB-D observations. To resolve this issue, we propose novel tracking and mapping strategies that work with a novel 3D representation, dubbed view-tied 3D Gaussians, for RGB-D SLAM systems. View-tied 3D Gaussians are simplified Gaussians tied to depth pixels, which removes the need to learn locations, rotations, and multi-dimensional variances. Tying Gaussians to views not only saves storage significantly but also allows us to employ many more Gaussians to represent local details within limited GPU memory. Moreover, our strategies remove the need to keep all Gaussians learnable throughout training, while improving rendering quality and tracking accuracy. We justify the effectiveness of these designs and report better performance than the latest methods on widely used benchmarks in terms of rendering quality, tracking accuracy, and scalability. Code and videos are available on our project page: https://machineperceptionlab.github.io/VTGaussian-SLAM-Project
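To make the core idea concrete, the following is a minimal sketch of how Gaussians could be tied to depth pixels: positions come from back-projecting the depth map through the camera intrinsics and pose, so only per-Gaussian appearance parameters (color, opacity) would remain learnable. The function name, the fixed isotropic scale, and the default opacity are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def view_tied_gaussians(depth, K, pose, opacity0=0.5):
    """Anchor one Gaussian to each valid depth pixel (illustrative sketch).

    depth : (H, W) depth map in meters; zeros mark invalid pixels.
    K     : (3, 3) pinhole intrinsics.
    pose  : (4, 4) camera-to-world transform.
    Positions are derived from the depth image rather than learned;
    rotations/variances are fixed (isotropic here).
    """
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    z = depth
    valid = z > 0
    # Back-project each pixel into camera coordinates.
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    pts_cam = np.stack([x, y, z], axis=-1)[valid]            # (N, 3)
    # Transform to world coordinates with the camera-to-world pose.
    pts_world = pts_cam @ pose[:3, :3].T + pose[:3, 3]
    # Fixed isotropic scale proportional to the pixel's metric footprint.
    scales = z[valid] / fx
    # Opacity initialized uniformly; only appearance would be optimized.
    opacities = np.full(len(pts_world), opacity0)
    return pts_world, scales, opacities
```

Because positions, rotations, and variances are determined by the view's depth image, each Gaussian costs only a few scalars of learnable state, which is what allows many more Gaussians per unit of GPU memory.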