🤖 AI Summary
To address the insufficient robustness of visual SLAM in challenging scenarios (low-texture environments, motion blur, and complex illumination), which critically impairs localization accuracy and tracking stability in assistive navigation for the visually impaired, this paper proposes a deep learning-enhanced robust SLAM front-end. Methodologically, it replaces conventional hand-crafted features with SuperPoint for keypoint detection and LightGlue for feature matching, substantially improving feature repeatability and matching reliability under extreme conditions, and integrates both networks into an RGB-D SLAM system. Evaluated on the TUM RGB-D, ICL-NUIM, and TartanAir benchmarks, it achieves an average absolute pose error 87.84% lower than ORB-SLAM3 and outperforms existing RGB-D SLAM methods by 36.77%. This yields a high-precision, highly adaptive perception front-end suitable for resource-constrained real-time assistive navigation systems.
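The front-end pairing described above can be illustrated with the public reference implementations of the two networks. The following is a minimal sketch, not the authors' code: it assumes the `lightglue` PyTorch package from the cvg/LightGlue repository (which bundles a SuperPoint extractor), and the frame paths and keypoint budget are placeholders.

```python
# Minimal sketch of a SuperPoint + LightGlue matching front-end,
# assuming `pip install lightglue` (cvg/LightGlue); not the authors' code.
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# SuperPoint replaces hand-crafted (e.g., ORB) keypoint detection/description.
extractor = SuperPoint(max_num_keypoints=1024).eval().to(device)
# LightGlue replaces descriptor-distance matching between consecutive frames.
matcher = LightGlue(features="superpoint").eval().to(device)

# Placeholder paths for two consecutive RGB frames.
image0 = load_image("frame_t0.png").to(device)
image1 = load_image("frame_t1.png").to(device)

with torch.no_grad():
    feats0 = extractor.extract(image0)   # keypoints + descriptors, frame t0
    feats1 = extractor.extract(image1)   # keypoints + descriptors, frame t1
    matches01 = matcher({"image0": feats0, "image1": feats1})

# Drop the batch dimension and gather matched keypoint coordinates.
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]
matches = matches01["matches"]             # (K, 2) index pairs into each frame
pts0 = feats0["keypoints"][matches[:, 0]]  # (K, 2) pixel coords in frame t0
pts1 = feats1["keypoints"][matches[:, 1]]  # (K, 2) pixel coords in frame t1
```

In an RGB-D pipeline such as ORB-SLAM3's, the matched pixel coordinates would then be back-projected with the depth map and passed to the existing pose-estimation and mapping back-end.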
📝 Abstract
Despite advances in SLAM technology, robust operation under adverse conditions such as low-texture scenes, motion blur, and difficult lighting remains an open problem. These conditions are common in applications like assistive navigation for the visually impaired, where they undermine localization accuracy and tracking stability and thereby reduce navigation reliability and safety. To overcome these limitations, we present SELM-SLAM3, a deep learning-enhanced visual SLAM framework that integrates SuperPoint and LightGlue for robust feature extraction and matching. We evaluated the framework on the TUM RGB-D, ICL-NUIM, and TartanAir datasets, which feature diverse and challenging scenarios. SELM-SLAM3 outperforms conventional ORB-SLAM3 by an average of 87.84% and exceeds state-of-the-art RGB-D SLAM systems by 36.77%. Our framework demonstrates enhanced performance under challenging conditions such as low-texture scenes and fast motion, providing a reliable platform for developing navigation aids for the visually impaired.
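The reported percentages compare absolute pose error against the baselines. As a rough illustration of the underlying metric only (not the paper's evaluation code), the translational ATE RMSE over two time-associated, pre-aligned trajectories can be computed as below; the trajectories and the `ate_rmse` helper are hypothetical.

```python
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """RMSE of translational absolute trajectory error.

    Assumes both inputs are (N, 3) position arrays that are already
    time-associated and aligned into a common frame (e.g., via a
    Horn/Umeyama fit), as in the standard TUM RGB-D benchmark tooling.
    """
    err = gt_xyz - est_xyz  # per-pose translational error
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Hypothetical example: short ground-truth and estimated trajectories.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = np.array([[0.0, 0.01, 0.0], [1.02, 0.0, 0.0], [1.98, 0.0, 0.01]])
print(f"ATE RMSE: {ate_rmse(gt, est):.4f} m")

# A "X% lower error" claim then reduces to the relative improvement:
# 100 * (ate_baseline - ate_ours) / ate_baseline
```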