Bridging Domain Gap for Flight-Ready Spaceborne Vision

📅 2024-09-18
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Accurate pose estimation of non-cooperative spacecraft using monocular vision onboard satellites remains challenging due to the scarcity of real space imagery, severe domain shift between synthetic and real data, and the stringent computational constraints of space-grade edge devices. Method: This paper proposes SPNv3, a lightweight neural network tailored for onboard deployment. It integrates synthetic-data-driven training, cross-domain adaptive augmentation, transfer learning, and a compact vision transformer architecture, optimized via a systematic trade-off between computational efficiency and robustness. Contribution/Results: SPNv3 achieves, for the first time, strong generalization to hardware-in-the-loop real images—without any real-world annotations—when trained exclusively on synthetic data. On GPU platforms, it runs at inference rates well above the update frequency of satellite navigation filters while matching state-of-the-art pose accuracy. The model demonstrates superior robustness with minimal computational overhead, making it suitable for resource-constrained space applications.
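The summary does not spell out how pose accuracy is scored. As an illustration only, spacecraft pose estimation benchmarks in this area (e.g. the SPEED challenges) typically combine a geodesic rotation error between quaternions with a translation error normalized by the true target distance. A minimal pure-Python sketch, with function names of my own choosing:

```python
import math

def quat_angle_error(q_est, q_true):
    """Geodesic angular error (radians) between two unit quaternions (w, x, y, z)."""
    dot = abs(sum(a * b for a, b in zip(q_est, q_true)))
    dot = min(1.0, dot)  # guard against rounding slightly above 1
    return 2.0 * math.acos(dot)

def pose_score(q_est, t_est, q_true, t_true):
    """SPEED-style pose score: rotation error plus translation error
    normalized by the true distance to the target."""
    e_rot = quat_angle_error(q_est, q_true)
    e_trans = math.dist(t_est, t_true) / math.dist(t_true, (0.0, 0.0, 0.0))
    return e_rot + e_trans

# A perfect estimate scores zero; errors in either term increase the score.
print(pose_score((1, 0, 0, 0), (0, 0, 10), (1, 0, 0, 0), (0, 0, 10)))  # 0.0
```

Lower is better; the normalization makes the translation term scale-free across different target ranges.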

📝 Abstract
This work presents Spacecraft Pose Network v3 (SPNv3), a Neural Network (NN) for monocular pose estimation of a known, non-cooperative target spacecraft. As opposed to existing literature, SPNv3 is designed and trained to be computationally efficient while providing robustness to spaceborne images that have not been observed during offline training and validation on the ground. These characteristics are essential to deploying NNs on space-grade edge devices. They are achieved through careful NN design choices, and an extensive trade-off analysis reveals features such as data augmentation, transfer learning and vision transformer architecture as a few of those that contribute to simultaneously maximizing robustness and minimizing computational overhead. Experiments demonstrate that the final SPNv3 can achieve state-of-the-art pose accuracy on hardware-in-the-loop images from a robotic testbed while having trained exclusively on computer-generated synthetic images, effectively bridging the domain gap between synthetic and real imagery. At the same time, SPNv3 runs well above the update frequency of modern satellite navigation filters when tested on a representative graphical processing unit system with flight heritage. Overall, SPNv3 is an efficient, flight-ready NN model readily applicable to a wide range of close-range rendezvous and proximity operations with target resident space objects. The code implementation of SPNv3 will be made publicly available.
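The abstract's claim that SPNv3 "runs well above the update frequency of modern satellite navigation filters" reduces to a simple budget check: one inference must fit inside one filter update period. A minimal sketch, with the numbers chosen purely for illustration (the paper's actual latencies and filter rates are not given here):

```python
def meets_filter_budget(latency_ms, filter_hz):
    """True if a single inference fits inside one navigation-filter update period."""
    period_ms = 1000.0 / filter_hz
    return latency_ms < period_ms

# Illustrative numbers only: a 50 ms inference against a 5 Hz filter (200 ms period).
print(meets_filter_budget(50.0, 5.0))  # True
```

In practice the margin also has to absorb image acquisition, preprocessing, and filter computation, so the inference budget is tighter than the raw period.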
Problem

Research questions and friction points this paper is trying to address.

Monocular pose estimation for non-cooperative spacecraft
Bridging domain gap between synthetic and real imagery
Enabling computationally efficient neural networks for space-grade hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Network for monocular pose estimation
Robustness through data augmentation and transfer learning
Vision transformer architecture minimizing computational overhead
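The exact augmentations SPNv3 uses are not listed in this summary, but the kind of photometric jitter commonly used to harden synthetically trained models against real imagery can be sketched in a few lines. All parameter names and defaults below are illustrative assumptions, operating on a flat list of grayscale values in [0, 1]:

```python
import random

def photometric_jitter(pixels, brightness=0.2, contrast=0.2, noise=0.05, rng=None):
    """Randomized brightness/contrast/noise jitter on grayscale pixel values
    in [0, 1] -- illustrative of the augmentation used to bridge the
    synthetic-to-real domain gap (parameters are assumptions, not the paper's).
    """
    rng = rng or random.Random()
    b = rng.uniform(-brightness, brightness)    # additive brightness shift
    c = 1.0 + rng.uniform(-contrast, contrast)  # multiplicative contrast gain
    out = []
    for p in pixels:
        p = c * (p - 0.5) + 0.5 + b             # contrast about mid-gray, then shift
        p += rng.gauss(0.0, noise)              # sensor-like Gaussian noise
        out.append(min(1.0, max(0.0, p)))       # clamp back to the valid range
    return out

img = [0.1, 0.5, 0.9]
print(photometric_jitter(img, rng=random.Random(0)))
```

Applying a fresh random jitter to every synthetic training image forces the network to rely on geometry rather than rendering-specific intensity statistics, which is one standard route to the domain robustness the paper reports.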