🤖 AI Summary
This work addresses the challenge of cross-view localization between monocular RGB images and local aerial maps in planetary exploration missions, where performance is hindered by scarce real-world annotations and significant domain discrepancies. To bridge the gap between real and synthetic imagery, we propose a dual-encoder deep network that uniquely integrates vision foundation model–guided semantic segmentation with a synthetic data–driven domain generalization strategy. Robust localization is further achieved through particle filter–based fusion of sequential observations. We also introduce the first cross-view planetary analog dataset comprising aligned real-synthetic image pairs. Experimental results demonstrate that our method achieves high-precision localization across complex real-world trajectories, confirming its effectiveness and generalization capability in planetary analog environments.
📝 Abstract
Accurate localisation in planetary robotics enables the advanced autonomy required to support the increased scale and scope of future missions. The successes of the Ingenuity helicopter and multiple planetary orbiters lay the groundwork for future missions that use ground-aerial robotic teams. In this paper, we consider rovers that use machine learning to localise themselves in a local aerial map from limited-field-of-view monocular ground-view RGB images. A key consideration for machine learning methods is that real space data with ground-truth position labels suitable for training is scarce. In this work, we propose a novel method for localising rovers in an aerial map using cross-view-localising dual-encoder deep neural networks. We leverage semantic segmentation with vision foundation models and high-volume synthetic data to bridge the domain gap to real images. We also contribute a new cross-view dataset of real-world rover trajectories with corresponding ground-truth localisation data captured in a planetary analogue facility, plus a high-volume dataset of analogous synthetic image pairs. Combining the cross-view networks with particle filters for state estimation yields accurate position estimates over simple and complex trajectories from sequences of ground-view images.
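The abstract's final step, fusing sequential cross-view observations with a particle filter, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the dual-encoder produces, for each ground-view frame, a per-cell similarity map over the aerial map, and all names, noise parameters, and the systematic-resampling choice are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_localise(similarity_maps, motion_deltas, map_size,
                             n_particles=500, motion_noise=1.0):
    """Fuse per-frame cross-view similarity maps with odometry via a particle filter.

    similarity_maps: list of 2D arrays, one per frame; a higher value means a
        better match between the ground-view embedding and that aerial-map cell
        (hypothetical stand-in for the dual-encoder network's output).
    motion_deltas: list of (dy, dx) odometry increments between frames.
    Returns a list of weighted-mean position estimates, one per frame.
    """
    h, w = map_size
    particles = rng.uniform([0, 0], [h, w], size=(n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for sim, delta in zip(similarity_maps, motion_deltas):
        # Predict: propagate particles with odometry plus Gaussian noise.
        particles += delta + rng.normal(0, motion_noise, particles.shape)
        particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
        # Update: weight each particle by the similarity at its map cell.
        cells = particles.astype(int)
        weights *= np.maximum(sim[cells[:, 0], cells[:, 1]], 1e-12)
        weights /= weights.sum()
        # Resample (systematic) when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            positions = (np.arange(n_particles) + rng.random()) / n_particles
            idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                             n_particles - 1)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
        estimates.append(weights @ particles)  # weighted mean position
    return estimates
```

The predict step injects odometry (plus noise) between frames, and the update step turns each frame's cross-view similarity into an observation likelihood, so position uncertainty shrinks as the sequence of ground-view images accumulates.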