🤖 AI Summary
To address the high computational cost of dynamic 3D scene updating and the trade-off between real-time performance and reconstruction quality, this paper proposes an incremental Gaussian Splatting-based reconstruction method. The approach rests on three key components: (1) a robust change-detection network that explicitly decouples dynamic and static scene components; (2) sparse scene sampling coupled with localized gradient optimization, enabling efficient partial retraining that updates only affected regions while preserving historical state; and (3) temporally consistent, high-fidelity spatiotemporal modeling. Experiments demonstrate significant improvements over state-of-the-art methods, achieving higher reconstruction accuracy (PSNR/SSIM) and 2.3× faster updates. The method provides a lightweight, reliable foundation for real-time 3D scene updating, with direct applicability to robot navigation, mixed reality, and embodied AI systems.
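The localized optimization described above can be illustrated as masking gradient updates so that only Gaussians flagged by change detection are retrained, while static Gaussians keep their historical parameters. This is a minimal NumPy sketch under assumed names (`positions`, `changed_mask`, `grads`); it is not CL-Splats' actual implementation.

```python
import numpy as np

# Minimal sketch: a scene of 6 Gaussians, each with an xyz mean.
# Change detection (assumed given here) flags which Gaussians lie
# in updated regions; only those receive gradient steps.
rng = np.random.default_rng(0)
positions = rng.normal(size=(6, 3))     # current Gaussian means
snapshot = positions.copy()             # stored historical state

changed_mask = np.array([True, True, False, False, False, False])
grads = rng.normal(size=positions.shape)  # stand-in for backprop gradients

lr = 0.1
# Masked update: gradients outside the changed region are zeroed,
# so static Gaussians are bit-identical to the snapshot afterwards.
positions -= lr * grads * changed_mask[:, None]

assert np.allclose(positions[~changed_mask], snapshot[~changed_mask])
```

In a real optimizer (e.g. Adam over per-Gaussian parameters), the same effect can be achieved by excluding static Gaussians from the parameter set or zeroing their gradients before each step, which is what makes partial retraining cheap relative to full re-optimization.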
📝 Abstract
In dynamic 3D environments, accurately updating scene representations over time is crucial for applications in robotics, mixed reality, and embodied AI. As scenes evolve, efficient methods to incorporate changes are needed to maintain up-to-date, high-quality reconstructions without the computational overhead of re-optimizing the entire scene. This paper introduces CL-Splats, which incrementally updates Gaussian splatting-based 3D representations from sparse scene captures. CL-Splats integrates a robust change-detection module that segments updated and static components within the scene, enabling focused, local optimization that avoids unnecessary re-computation. Moreover, CL-Splats supports storing and recovering previous scene states, facilitating temporal segmentation and new scene-analysis applications. Our extensive experiments demonstrate that CL-Splats achieves efficient updates with improved reconstruction quality over the state-of-the-art. This establishes a robust foundation for future real-time adaptation in 3D scene reconstruction tasks.
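The abstract notes that CL-Splats supports storing and recovering previous scene states. Since each update touches only the changed region, one natural way to realize this is delta storage: record just the overwritten parameters per update rather than a full copy of the scene. The following is a hypothetical sketch of that idea; the class and its methods are illustrative assumptions, not the paper's API.

```python
class SceneHistory:
    """Hypothetical sketch: keep per-update deltas so any previous
    scene state can be recovered without duplicating the full model."""

    def __init__(self, params):
        self.params = dict(params)  # current per-Gaussian parameters
        self.deltas = []            # one (overwritten, added) entry per update

    def apply_update(self, new_params):
        # Save only the old values of touched Gaussians, plus the keys
        # this update introduces, so the step can be undone exactly.
        overwritten = {k: self.params[k] for k in new_params if k in self.params}
        added = [k for k in new_params if k not in self.params]
        self.deltas.append((overwritten, added))
        self.params.update(new_params)

    def rollback(self):
        # Restore the most recent previous state.
        overwritten, added = self.deltas.pop()
        for k in added:
            del self.params[k]
        self.params.update(overwritten)
```

For example, updating `{"g1": 5.0}` in a scene `{"g0": 1.0, "g1": 2.0}` stores only `g1`'s old value; `rollback()` then recovers the prior state, which is the kind of lightweight temporal access the abstract's scene-analysis applications would rely on.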