🤖 AI Summary
To address global localization and online 6-DoF pose estimation for ground robots in GPS-denied environments such as dense forests, this paper proposes a cross-view (aerial–ground) factor-graph joint-optimization framework. The method integrates tightly coupled visual–inertial mapping, multi-view geometric constraints, and a deep-learning-based relocalization module, and for the first time models and co-optimizes aerial imagery and ground-sensor data within a single factor graph. By incorporating aerial–ground view-consistency priors and a robust outlier-suppression mechanism, the approach significantly improves pose-estimation accuracy and relocalization robustness under complex forest-canopy conditions. Experiments demonstrate drift-free localization beneath dense canopies with bounded position error, a 42% reduction in absolute trajectory error compared to state-of-the-art methods, and a 98.7% relocalization success rate.
📝 Abstract
This paper presents a novel approach for robust global localisation and 6DoF pose estimation of ground robots in forest environments by leveraging cross-view factor graph optimisation and deep-learned re-localisation. The proposed method addresses the challenges of aligning aerial and ground data for pose estimation, which is crucial for accurate point-to-point navigation in GPS-denied environments. By integrating information from both perspectives into a factor graph framework, our approach effectively estimates the robot's global position and orientation. We validate the performance of our method through extensive experiments in diverse forest scenarios, demonstrating its superiority over existing baselines in terms of accuracy and robustness in these challenging environments. Experimental results show that our proposed localisation system can achieve drift-free localisation with bounded positioning errors, ensuring reliable and safe robot navigation under canopies.
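For intuition, the sketch below shows how such a cross-view factor graph could be assembled with the GTSAM library: ground-robot poses form a chain linked by odometry factors (standing in for the visual–inertial front end), while deep-learned re-localisations against the aerial map enter as robustified global priors that bound drift. This is an illustrative assumption, not the authors' implementation; all noise values, poses, and the use of a Huber kernel for outlier suppression are made up for the example.

```python
# Minimal cross-view factor-graph sketch (illustrative only, not the paper's code).
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Noise models (sigmas are placeholders, not from the paper):
# first three entries are rotation (rad), last three translation (m).
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.01, 0.01, 0.01, 0.05, 0.05, 0.05]))
reloc_base = gtsam.noiseModel.Diagonal.Sigmas(
    np.array([0.1, 0.1, 0.1, 0.5, 0.5, 0.5]))
# A robust (Huber) kernel down-weights outlier aerial-ground matches.
reloc_noise = gtsam.noiseModel.Robust.Create(
    gtsam.noiseModel.mEstimator.Huber.Create(1.345), reloc_base)

# Anchor the first pose, then chain odometry between consecutive poses.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), reloc_base))
initial.insert(X(0), gtsam.Pose3())
for k in range(1, 5):
    odom = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))  # 1 m forward
    graph.add(gtsam.BetweenFactorPose3(X(k - 1), X(k), odom, odom_noise))
    initial.insert(X(k), initial.atPose3(X(k - 1)).compose(odom))

# A deep-learned match against the aerial map yields a global pose for X(3);
# the robust noise model suppresses it if it disagrees badly with the chain.
aerial_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(3.1, 0.2, 0.0))
graph.add(gtsam.PriorFactorPose3(X(3), aerial_pose, reloc_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(4)).translation())  # drift-corrected final pose
```

The robustified prior is the key design choice in this sketch: a single wrong aerial-ground match is down-weighted rather than allowed to corrupt the whole trajectory, which is the role the outlier-suppression mechanism plays in the summary above.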