Vision Foundation Models for Domain Generalisable Cross-View Localisation in Planetary Ground-Aerial Robotic Teams

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of cross-view localisation between monocular RGB images and local aerial maps in planetary exploration missions, where performance is hindered by scarce real-world annotations and significant domain discrepancies. To bridge the gap between real and synthetic imagery, we propose a dual-encoder deep network that integrates vision foundation model–guided semantic segmentation with a synthetic-data-driven domain generalisation strategy. Robust localisation is further achieved through particle-filter-based fusion of sequential observations. We also contribute a new cross-view planetary analogue dataset of real rover trajectories with ground-truth positions, together with a high-volume dataset of analogous synthetic image pairs. Experimental results demonstrate that our method achieves high-precision localisation across complex real-world trajectories, confirming its effectiveness and generalisation capability in planetary analogue environments.
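As a rough illustration of the cross-view-localising dual-encoder idea described above, the sketch below encodes a ground-view image and a set of candidate aerial-map tiles into a shared embedding space and scores candidate positions by cosine similarity. All class names, encoder architectures, and tensor shapes are illustrative assumptions, not the authors' implementation (in the paper, the ground branch is additionally guided by foundation-model semantic segmentation).

```python
# Hedged sketch of a dual-encoder cross-view matcher (illustrative only;
# architectures, shapes, and names are assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualEncoderMatcher(nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Separate encoders for the two views; stand-in CNNs are used here in
        # place of the paper's segmentation-guided branches.
        self.ground_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim),
        )
        self.aerial_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed_dim),
        )

    def forward(self, ground_img, aerial_tiles):
        # ground_img: (B, 3, H, W); aerial_tiles: (N, 3, h, w) candidate map patches.
        g = F.normalize(self.ground_encoder(ground_img), dim=-1)    # (B, D)
        a = F.normalize(self.aerial_encoder(aerial_tiles), dim=-1)  # (N, D)
        # Cosine similarity of each ground view against every candidate tile;
        # the resulting scores can serve as a localisation likelihood over the map.
        return g @ a.T  # (B, N)
```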

📝 Abstract
Accurate localisation in planetary robotics enables the advanced autonomy required to support the increased scale and scope of future missions. The successes of the Ingenuity helicopter and multiple planetary orbiters lay the groundwork for future missions that use ground-aerial robotic teams. In this paper, we consider rovers using machine learning to localise themselves in a local aerial map using limited field-of-view monocular ground-view RGB images as input. A key consideration for machine learning methods is that real space data with ground-truth position labels suitable for training is scarce. In this work, we propose a novel method of localising rovers in an aerial map using cross-view-localising dual-encoder deep neural networks. We leverage semantic segmentation with vision foundation models and high-volume synthetic data to bridge the domain gap to real images. We also contribute a new cross-view dataset of real-world rover trajectories with corresponding ground-truth localisation data captured in a planetary analogue facility, plus a high-volume dataset of analogous synthetic image pairs. Using particle filters for state estimation with the cross-view networks allows accurate position estimation over simple and complex trajectories based on sequences of ground-view images.
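The abstract mentions particle-filter state estimation over sequences of ground-view images. The sketch below shows one plausible way such fusion could look, with a simple 2-D position state, odometry-driven motion noise, and a cross-view similarity map acting as the observation likelihood. All function names, parameters, and noise models are assumptions for illustration, not the authors' filter.

```python
# Hedged sketch of particle-filter fusion of sequential cross-view observations.
# Motion/observation models and all parameters are illustrative assumptions.
import numpy as np


def predict(particles, odom_delta, motion_noise=0.5):
    """Propagate 2-D position particles by the rover's odometry estimate plus noise."""
    noise = np.random.normal(0.0, motion_noise, particles.shape)
    return particles + odom_delta + noise


def update(particles, weights, similarity_map, map_resolution=1.0):
    """Reweight particles by the cross-view similarity at each particle's map cell."""
    cells = np.clip((particles / map_resolution).astype(int),
                    0, np.array(similarity_map.shape) - 1)
    likelihood = similarity_map[cells[:, 0], cells[:, 1]]
    weights = weights * (likelihood + 1e-9)
    return weights / weights.sum()


def resample(particles, weights):
    """Multinomial resampling when the effective sample size collapses."""
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights


# Minimal usage pattern: fuse a sequence of (odometry, similarity map) observations.
rng = np.random.default_rng(0)
particles = rng.uniform(0, 100, size=(500, 2))   # positions in map coordinates
weights = np.full(500, 1.0 / 500)
observations = []                                 # would hold real (odom, sim_map) pairs
for odom_delta, sim_map in observations:
    particles = predict(particles, odom_delta)
    weights = update(particles, weights, sim_map)
    particles, weights = resample(particles, weights)
estimate = np.average(particles, axis=0, weights=weights)
```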
Problem

Research questions and friction points this paper is trying to address.

cross-view localisation
domain generalisation
planetary robotics
vision foundation models
ground-aerial robotic teams
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Foundation Models
Cross-View Localisation
Domain Generalisation
Synthetic-to-Real Transfer
Dual-Encoder Neural Networks
Lachlan Holden
AI for Space Group and Andy Thomas Centre for Space Resources, The University of Adelaide
Feras Dayoub
The University of Adelaide - Australian Institute for Machine Learning (AIML)
Mobile Robotics · Robotic Vision · Robot Learning · Field Robotics · Chronorobotics
Alberto Candela
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
David Harvey
AI for Space Group and Andy Thomas Centre for Space Resources, The University of Adelaide
Tat-Jun Chin
SmartSat CRC Professorial Chair of Sentient Satellites, The University of Adelaide
Computer Vision · Machine Learning · Artificial Intelligence · Space