🤖 AI Summary
Existing neural radiance field (NeRF)-based SLAM approaches rely on a single monolithic volumetric representation, which makes loop-closure integration inefficient and limits scalability. To address this, we propose Neural Graph Mapping, a novel framework that anchors lightweight MLP-based implicit fields to the nodes of a sparse pose graph constructed by visual SLAM, enabling joint optimization of the neural fields and the pose graph. This design supports loop-closure-driven local re-integration and incremental field updates, overcoming fundamental limitations of global neural fields in loop consistency and large-scale mapping. Evaluated on building-scale scenes with multiple loops, our method achieves superior reconstruction accuracy, real-time performance, and global consistency compared to state-of-the-art methods, significantly improving the scalability and practicality of large-scale dense 3D reconstruction.
📝 Abstract
Existing neural field-based SLAM methods typically employ a single monolithic field as their scene representation. This prevents efficient incorporation of loop closure constraints and limits scalability. To address these shortcomings, we propose a neural mapping framework which anchors lightweight neural fields to the pose graph of a sparse visual SLAM system. Our approach can integrate large-scale loop closures while limiting the reintegration required. Furthermore, we verify the scalability of our approach by demonstrating successful building-scale mapping that incorporates multiple loop closures during optimization, and show that our method outperforms existing state-of-the-art approaches on large scenes in terms of quality and runtime. Our code is available at https://kth-rpl.github.io/neural_graph_mapping/.
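The core idea, anchoring small local fields to pose-graph nodes so that the map deforms rigidly with pose-graph updates (e.g. after a loop closure), can be sketched in a few lines. This is a toy numpy stand-in, not the paper's implementation: the class names, the blending weight, and the tiny random-weight MLP are all illustrative assumptions.

```python
import numpy as np


class TinyMLP:
    """Small random-weight MLP standing in for a learned local field (hypothetical)."""

    def __init__(self, rng, in_dim=3, hidden=16, out_dim=1):
        self.w1 = rng.normal(scale=0.5, size=(in_dim, hidden))
        self.w2 = rng.normal(scale=0.5, size=(hidden, out_dim))

    def __call__(self, x):
        return np.tanh(x @ self.w1) @ self.w2  # (N, out_dim)


class GraphNode:
    """Pose-graph node: a rigid pose (R, t) plus a neural field anchored to it."""

    def __init__(self, rng, R, t):
        self.R, self.t = R, t
        self.field = TinyMLP(rng)

    def query(self, pts_world):
        # Transform world points into the node's local frame before querying,
        # so the field moves rigidly whenever pose-graph optimization updates (R, t).
        pts_local = (pts_world - self.t) @ self.R  # rows are R^T (p - t)
        return self.field(pts_local)


def query_map(nodes, pts_world, radius=2.0):
    """Blend the predictions of all nodes whose anchors lie within `radius`."""
    vals = np.zeros((len(pts_world), 1))
    wsum = np.zeros((len(pts_world), 1))
    for n in nodes:
        d = np.linalg.norm(pts_world - n.t, axis=1, keepdims=True)
        w = np.maximum(0.0, 1.0 - d / radius)  # simple distance-based weight
        vals += w * n.query(pts_world)
        wsum += w
    return vals / np.maximum(wsum, 1e-9)


rng = np.random.default_rng(0)
nodes = [GraphNode(rng, np.eye(3), np.array([float(i), 0.0, 0.0])) for i in range(3)]
pts = rng.uniform(-1.0, 3.0, size=(5, 3))
print(query_map(nodes, pts).shape)  # (5, 1)
```

Because each field is queried in its node's local frame, correcting the pose graph after a loop closure moves every anchored field into place without retraining it, which is what avoids the global reintegration a monolithic field would need.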