DynaGSLAM: Real-Time Gaussian-Splatting SLAM for Online Rendering, Tracking, Motion Predictions of Moving Objects in Dynamic Scenes

📅 2025-03-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses moving-object interference in Gaussian Splatting SLAM (GS-SLAM) under dynamic scenes. It introduces the first real-time dynamic GS-SLAM system supporting online high-fidelity rendering, camera pose tracking, and motion prediction. It pioneers joint modeling of static and dynamic components within GS-SLAM, abandoning the conventional static-scene assumption, and combines explicit 3D Gaussian representations with photometric-consistency optimization, dynamic segmentation, motion-prior modeling, and a lightweight bundle adjustment for end-to-end dynamic object tracking and motion forecasting. Evaluated on three real-world dynamic datasets, the system runs at over 20 FPS while remaining memory-efficient, achieving PSNR gains of 2.1–3.8 dB over state-of-the-art static and "anti"-dynamic baselines. This advances the practical deployment of GS-SLAM in realistic dynamic environments.
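The summary mentions motion-prior modeling for forecasting dynamic Gaussians. As a minimal sketch of that idea, not the authors' implementation, the class and method names below (DynaGaussians, update, predict) are hypothetical; a constant-velocity prior over per-Gaussian centers could look like this:

```python
# Minimal sketch: constant-velocity motion prior for dynamic Gaussian centers.
# All names here are illustrative assumptions, not the paper's actual API.
import numpy as np

class DynaGaussians:
    """Tracks dynamic Gaussian centers across frames and forecasts motion."""

    def __init__(self, centers: np.ndarray):
        self.centers = centers                    # (N, 3) current 3D means
        self.velocities = np.zeros_like(centers)  # (N, 3) per-Gaussian velocity

    def update(self, new_centers: np.ndarray, dt: float) -> None:
        """Refresh per-Gaussian velocities from the latest tracked positions."""
        self.velocities = (new_centers - self.centers) / dt
        self.centers = new_centers

    def predict(self, dt: float) -> np.ndarray:
        """Forecast next-frame centers under a constant-velocity prior."""
        return self.centers + self.velocities * dt

# Usage: track 100 Gaussians drifting along x at 30 FPS, then forecast.
rng = np.random.default_rng(0)
gs = DynaGaussians(rng.standard_normal((100, 3)))
gs.update(gs.centers + np.array([0.05, 0.0, 0.0]), dt=1 / 30)
predicted = gs.predict(dt=1 / 30)  # expected ~0.05 further along x
```

A real system would likely fit a richer motion model per object, but even this linear prior shows how predicted centers can seed the next frame's optimization.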

📝 Abstract
Simultaneous Localization and Mapping (SLAM) is one of the most important environment-perception and navigation algorithms for computer vision, robotics, and autonomous cars/drones. Hence, high-quality and fast mapping becomes a fundamental problem. With the advent of 3D Gaussian Splatting (3DGS) as an explicit representation with excellent rendering quality and speed, state-of-the-art (SOTA) works introduce GS to SLAM. Compared to classical pointcloud-SLAM, GS-SLAM generates photometric information by learning from input camera views and synthesizes unseen views with high-quality textures. However, these GS-SLAM methods fail when moving objects occupy the scene and violate the static assumption of bundle adjustment. The failed updates of moving GS affect the static GS and contaminate the full map over long frames. Although some efforts have been made by concurrent works to consider moving objects for GS-SLAM, they simply detect and remove the moving regions from GS rendering ("anti"-dynamic GS-SLAM), where only the static background can benefit from GS. To this end, we propose the first real-time GS-SLAM, "DynaGSLAM", that achieves high-quality online GS rendering, tracking, and motion prediction of moving objects in dynamic scenes while jointly estimating accurate ego motion. Our DynaGSLAM outperforms SOTA static and "anti"-dynamic GS-SLAM on three dynamic real datasets, while keeping speed and memory efficiency in practice.
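To make the abstract's contrast concrete, here is a hedged sketch of the two photometric objectives it describes: "anti"-dynamic GS-SLAM masks moving pixels out of the loss, while joint static+dynamic modeling supervises every pixel. The random arrays stand in for a Gaussian-Splatting rasterizer's output, and names like photometric_loss are illustrative, not the paper's API.

```python
# Sketch of the two supervision schemes the abstract contrasts, assuming a
# rendered RGB image, an observed frame, and a binary dynamic-object mask.
import numpy as np

def photometric_loss(rendered: np.ndarray, observed: np.ndarray,
                     weight: np.ndarray) -> float:
    """Weighted L1 photometric residual over an (H, W, 3) image pair."""
    return float(np.sum(weight[..., None] * np.abs(rendered - observed))
                 / max(weight.sum(), 1.0))

H, W = 48, 64
observed = np.random.rand(H, W, 3)
dynamic_mask = np.zeros((H, W))
dynamic_mask[10:20, 20:40] = 1.0  # pixels covered by a moving object

# "Anti"-dynamic GS-SLAM: drop moving pixels, so only the static
# background contributes to (and benefits from) GS optimization.
static_render = np.random.rand(H, W, 3)
loss_anti = photometric_loss(static_render, observed, 1.0 - dynamic_mask)

# Joint static+dynamic modeling: render everything and supervise all pixels,
# letting dynamic Gaussians explain the moving regions instead of deleting them.
joint_render = np.random.rand(H, W, 3)
loss_joint = photometric_loss(joint_render, observed, np.ones((H, W)))
```

The key difference is where the gradient flows: the masked loss can never improve the moving object's appearance, while the joint loss trains dynamic Gaussians alongside the static map.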
Problem

Research questions and friction points this paper is trying to address.

Moving objects violate the static-scene assumption of bundle adjustment, and failed updates of moving Gaussians contaminate the full map over long sequences
Concurrent "anti"-dynamic GS-SLAM works simply detect and remove moving regions, so only the static background benefits from Gaussian rendering
Estimating accurate ego motion in real time while objects move through the scene
Innovation

Methods, ideas, or system contributions that make the work stand out.

First real-time GS-SLAM that jointly models static and dynamic Gaussians in dynamic scenes
High-quality online rendering, tracking, and motion prediction of moving objects
Accurate ego-motion estimation despite moving objects (a minimal sketch follows this list)
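As noted above, a minimal sketch of the robust ego-motion idea: residuals on pixels likely to belong to moving objects are downweighted so they barely influence camera-pose optimization. The dynamic-probability input and the Huber kernel are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: keep pose estimation honest in dynamic scenes by
# downweighting residuals on pixels flagged as moving.
import numpy as np

def robust_pose_residual(residuals: np.ndarray,
                         dynamic_prob: np.ndarray,
                         huber_delta: float = 0.1) -> float:
    """Huber-robustified photometric cost, masked by dynamic probability.

    residuals    : (H, W) per-pixel photometric errors
    dynamic_prob : (H, W) probability each pixel belongs to a moving object
    """
    r = np.abs(residuals)
    huber = np.where(r <= huber_delta,
                     0.5 * r ** 2,
                     huber_delta * (r - 0.5 * huber_delta))
    static_weight = 1.0 - dynamic_prob  # moving pixels barely vote on pose
    return float(np.sum(static_weight * huber))
```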
👥 Authors
Runfa Blark Li
UC San Diego
Computer Vision/Graphics, RL/ML, Embodiment/Robotics
Mahdi Shaghaghi
Qualcomm XR Advanced Technology
Keito Suzuki
University of California, San Diego
Computer Vision, Deep Learning
Xinshuang Liu
UC San Diego
Varun Moparthi
UC San Diego
Bang Du
University of California San Diego
Walker Curtis
Qualcomm XR Advanced Technology
Martin Renschler
Qualcomm XR Advanced Technology
Ki Myung Brian Lee
University of California, San Diego
Robotics
Nikolay Atanasov
UC San Diego
Truong Nguyen
UC San Diego