🤖 AI Summary
This work addresses the degradation of loop closure detection in visual SLAM under drastic viewpoint and illumination changes by integrating the deep learning-based AnyLoc visual place recognition method into the DPV-SLAM framework. In addition to replacing traditional handcrafted features with deep feature representations, the approach introduces an adaptive similarity threshold mechanism that eliminates the need for manual parameter tuning. Experimental results show that the method significantly improves the accuracy and robustness of loop closure detection, consistently outperforming the original DPV-SLAM system across multiple indoor and outdoor datasets. These findings support the effectiveness and generalization capability of the proposed approach in complex and challenging environments.
📝 Abstract
Loop closure is crucial for maintaining the accuracy and consistency of visual SLAM. We propose a method to improve loop closure performance in DPV-SLAM. Our approach integrates AnyLoc, a learning-based visual place recognition technique, as a replacement for the classical Bag of Visual Words (BoVW) loop detection method. In contrast to BoVW, which relies on handcrafted features, AnyLoc uses deep feature representations, enabling more robust image retrieval across diverse viewpoints and lighting conditions. Furthermore, we propose an adaptive mechanism that dynamically adjusts the similarity threshold based on environmental conditions, removing the need for manual tuning. Experiments on both indoor and outdoor datasets demonstrate that our method significantly outperforms the original DPV-SLAM in terms of loop closure accuracy and robustness. The proposed method offers a practical and scalable solution for enhancing loop closure performance in modern SLAM systems.
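To make the retrieval-plus-adaptive-threshold idea concrete, here is a minimal sketch of a loop candidate detector. It assumes that each keyframe is summarized by an AnyLoc-style global descriptor (a fixed-length vector), that candidates are ranked by cosine similarity, and that the threshold adapts from statistics of recent best-match scores (mean plus `k` standard deviations). The class name `AdaptiveLoopDetector`, the parameters `k`, `min_history`, and `min_gap`, and the specific adaptation rule are all hypothetical illustrations, not the paper's actual method or the AnyLoc API.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two global descriptors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


class AdaptiveLoopDetector:
    """Hypothetical loop-candidate detector with a self-adapting threshold.

    Descriptors are assumed to come from a place-recognition backbone
    (e.g., AnyLoc-style global features); this class only handles retrieval
    and the thresholding decision.
    """

    def __init__(self, k: float = 2.0, min_history: int = 20, min_gap: int = 30):
        self.k = k                    # how many std-devs above the mean to require
        self.min_history = min_history  # scores needed before making decisions
        self.min_gap = min_gap        # skip temporally adjacent keyframes
        self.descriptors: list[np.ndarray] = []
        self.best_scores: list[float] = []  # running best-match scores

    def query(self, desc: np.ndarray) -> int | None:
        """Return the index of a loop-closure candidate, or None."""
        current_idx = len(self.descriptors)
        self.descriptors.append(desc)

        # Only compare against keyframes far enough in the past.
        candidates = self.descriptors[: max(0, current_idx - self.min_gap)]
        if not candidates:
            return None

        sims = np.array([cosine_similarity(desc, d) for d in candidates])
        best_idx = int(np.argmax(sims))
        best_sim = float(sims[best_idx])

        # Adaptive threshold from the distribution of recent best-match scores.
        if len(self.best_scores) >= self.min_history:
            threshold = float(np.mean(self.best_scores) + self.k * np.std(self.best_scores))
        else:
            threshold = np.inf  # not enough statistics yet: reject everything

        self.best_scores.append(best_sim)
        return best_idx if best_sim > threshold else None
```

In this sketch, a keyframe is flagged as a loop candidate only when its best retrieval score stands out from the recent score distribution, so the effective threshold rises in visually repetitive environments and relaxes in distinctive ones without any hand-set constant.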