EGS-SLAM: RGB-D Gaussian Splatting SLAM with Events

📅 2025-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GS-SLAM methods suffer from tracking drift and geometric degradation under sustained, severe motion blur. To address this, we propose the first robust GS-SLAM framework integrating event cameras with RGB-D data. Our method (1) explicitly models the continuous camera trajectory during exposure to decouple motion blur; (2) introduces a learnable event-to-image dynamic range alignment response function to enhance cross-modal consistency; and (3) designs an event-free supervised reconstruction loss to effectively suppress artifacts in Gaussian splatting. Evaluated on both synthetic and real-world blurred scenes, our approach significantly outperforms state-of-the-art GS-SLAM methods: pose estimation error is reduced by up to 42%, while geometric accuracy and texture fidelity of reconstructions are substantially improved. Crucially, it maintains stable tracking and high-fidelity 3D reconstruction even under extreme motion blur.

📝 Abstract
Gaussian Splatting SLAM (GS-SLAM) offers a notable improvement over traditional SLAM methods, enabling photorealistic 3D reconstruction that conventional approaches often struggle to achieve. However, existing GS-SLAM systems perform poorly under persistent and severe motion blur commonly encountered in real-world scenarios, leading to significantly degraded tracking accuracy and compromised 3D reconstruction quality. To address this limitation, we propose EGS-SLAM, a novel GS-SLAM framework that fuses event data with RGB-D inputs to simultaneously reduce motion blur in images and compensate for the sparse and discrete nature of event streams, enabling robust tracking and high-fidelity 3D Gaussian Splatting reconstruction. Specifically, our system explicitly models the camera's continuous trajectory during exposure, supporting event- and blur-aware tracking and mapping on a unified 3D Gaussian Splatting scene. Furthermore, we introduce a learnable camera response function to align the dynamic ranges of events and images, along with a no-event loss to suppress ringing artifacts during reconstruction. We validate our approach on a new dataset comprising synthetic and real-world sequences with significant motion blur. Extensive experimental results demonstrate that EGS-SLAM consistently outperforms existing GS-SLAM systems in both trajectory accuracy and photorealistic 3D Gaussian Splatting reconstruction. The source code will be available at https://github.com/Chensiyu00/EGS-SLAM.
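The abstract's core idea, explicitly modeling the camera's continuous trajectory during exposure, can be illustrated with a minimal sketch: a blurred frame is synthesized as the average of sharp renders at poses interpolated across the exposure interval, so the blur itself becomes a supervision signal for the trajectory. Everything below is an illustrative assumption, not the paper's implementation; `render` is a toy stand-in for a Gaussian-splatting renderer, and poses are simplified to 3-vector translations with linear interpolation.

```python
import numpy as np

def interpolate_poses(start_pose, end_pose, n):
    """Linearly interpolate n camera poses (simplified here to
    3-vector translations) across the exposure interval."""
    alphas = np.linspace(0.0, 1.0, n)
    return [(1.0 - a) * start_pose + a * end_pose for a in alphas]

def render(pose):
    """Toy stand-in for a sharp 3D Gaussian Splatting render at one
    pose: a 4x4 image whose brightness depends on the pose."""
    return np.full((4, 4), pose.sum(), dtype=np.float64)

def synthesize_blurred(start_pose, end_pose, n=8):
    """Model the motion-blurred frame as the mean of sharp renders
    along the continuous trajectory during exposure."""
    poses = interpolate_poses(start_pose, end_pose, n)
    return np.mean([render(p) for p in poses], axis=0)
```

In an actual system the residual between this synthesized blur and the captured RGB frame would be backpropagated to the trajectory and the Gaussian scene, which is what makes the tracking "blur-aware".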
Problem

Research questions and friction points this paper is trying to address.

Improving SLAM accuracy under severe motion blur
Fusing event data with RGB-D for robust tracking
Enhancing photorealistic 3D reconstruction quality in blur-degraded real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses event data with RGB-D inputs
Explicitly models the camera's continuous trajectory during exposure
Introduces learnable camera response function
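The last two contributions above, a learnable response function aligning the dynamic ranges of events and images, and a no-event loss that suppresses artifacts where no events fired, can be sketched as follows. This is a hedged toy interpretation under the standard event-camera generative model (an event fires when the log-intensity change exceeds a contrast threshold); the function names `crf` and `event_losses` and the linear form of the response function are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def crf(log_intensity, gamma, offset):
    """Hypothetical learnable response function: an affine map (with
    learnable gamma/offset) from event-domain log intensity into the
    image's [0, 1] dynamic range."""
    return np.clip(gamma * log_intensity + offset, 0.0, 1.0)

def event_losses(log_I0, log_I1, event_count, threshold=0.2):
    """Where events fired, penalize mismatch between the predicted
    log-intensity change and the event-accumulated change
    (threshold * count). Where none fired, a no-event loss penalizes
    any predicted change larger than the contrast threshold."""
    pred = log_I1 - log_I0
    fired = event_count != 0
    event_loss = (np.mean((pred[fired] - threshold * event_count[fired]) ** 2)
                  if fired.any() else 0.0)
    excess = np.maximum(np.abs(pred[~fired]) - threshold, 0.0)
    no_event_loss = np.mean(excess ** 2) if (~fired).any() else 0.0
    return event_loss, no_event_loss
```

The no-event term is what the summary calls "event-free supervised reconstruction loss": pixels with zero events constrain the rendered scene to stay photometrically constant, which suppresses ringing artifacts in the splatting result.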
🔎 Similar Papers
No similar papers found.
Siyu Chen
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Shenghai Yuan
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Thien-Minh Nguyen
Research Asst Prof, NTU Singapore | Lecturer - The University of Queensland (incoming)
Robot Perception and Navigation · Cooperative Robotics · Robot Learning
Zhuyu Huang
School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
Chenyang Shi
Beihang University
Neuromorphic computing
Jin Jing
School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
Lihua Xie
Professor of Electrical Engineering, Nanyang Technological University
Robust Control · Networked Control · Multi-agent Systems