🤖 AI Summary
Real-time simultaneous localization and mapping (SLAM) is undergoing a paradigm shift driven by radiance field techniques—particularly Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS)—yet their integration into SLAM remains fragmented, with no unified analysis of their capabilities, limitations, or deployment trade-offs.
Method: This work introduces the first comprehensive survey and unified analytical framework for radiance-field-driven SLAM, systematically comparing it against traditional geometric, learning-based, and implicit-representation SLAM methods along three dimensions: representational capacity, optimization mechanisms, and deployment constraints. It establishes a taxonomy covering more than 120 works and conducts quantitative benchmarking of accuracy, efficiency, and robustness in dynamic scenes.
Contribution/Results: The survey identifies fundamental bottlenecks—including training latency, dynamic-object modeling, and edge-device compatibility—and provides the first holistic synthesis of this emerging field, serving as a theoretical roadmap and technical guide for the development of end-to-end semantic SLAM.
📝 Abstract
Over the past two decades, research in Simultaneous Localization and Mapping (SLAM) has undergone a significant evolution, highlighting its critical role in enabling autonomous exploration of unknown environments. This evolution ranges from hand-crafted methods, through the era of deep learning, to more recent developments centered on Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) representations. Recognizing the growing body of research and the absence of a survey on the topic, this paper aims to provide the first comprehensive overview of SLAM progress through the lens of the latest advancements in radiance fields. It sheds light on the background, evolutionary path, and inherent strengths and limitations of these methods, and serves as a fundamental reference highlighting their dynamic progress and specific challenges.