🤖 AI Summary
Expert demonstration data for on-orbit rendezvous and docking is extremely scarce (only 100 trajectories here), which severely limits imitation learning for guidance, navigation, and control (GNC). Method: This paper proposes an image-driven GNC framework based on the Action Chunking Transformer (ACT), which jointly processes visual and state observations to learn an end-to-end mapping from high-dimensional inputs to continuous thrust/torque commands; a temporal action chunking mechanism improves policy smoothness, consistency, and generalization. Contribution/Results: Evaluated on an International Space Station (ISS) docking simulation, the method outperforms a meta-reinforcement learning baseline trained for 40 million environment steps while using only 6,300 interactions. It shows significant gains in docking accuracy, control smoothness, and sample efficiency, demonstrating the feasibility and promise of few-shot imitation learning for aerospace GNC applications.
📝 Abstract
We present an imitation learning approach for spacecraft guidance, navigation, and control (GNC) that achieves high performance from limited data. Using only 100 expert demonstrations, equivalent to 6,300 environment interactions, our method, based on Action Chunking with Transformers (ACT), learns a control policy that maps visual and state observations to thrust and torque commands. We evaluate ACT on a rendezvous task: in-orbit docking with the International Space Station (ISS). ACT generates smoother, more consistent trajectories than a meta-reinforcement learning (meta-RL) baseline trained with 40 million interactions, achieving greater docking accuracy, smoother control, and far higher sample efficiency.
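The temporal action chunking described above can be sketched as follows. In ACT-style policies, at each timestep the network predicts a chunk of the next k actions; because chunks from successive timesteps overlap, several predictions exist for the same future step, and they are combined with an exponentially decaying weight on older predictions (temporal ensembling). This is a minimal illustration of that ensembling step only, not the authors' implementation; the chunk length, decay rate `m`, and 6-dimensional thrust/torque action vector are assumptions for the example.

```python
import numpy as np

def ensemble_action(chunk_buffer, t, m=0.01):
    """Combine overlapping action-chunk predictions for timestep t.

    chunk_buffer: dict mapping a chunk's start timestep -> array of
        shape (k, act_dim), the k actions predicted from that timestep.
    m: decay rate; older predictions get weight exp(-m * age).
    Returns the weighted average action for timestep t.
    """
    preds, weights = [], []
    for start, chunk in chunk_buffer.items():
        age = t - start  # how long ago this chunk was predicted
        if 0 <= age < len(chunk):  # chunk covers timestep t
            preds.append(chunk[age])
            weights.append(np.exp(-m * age))
    w = np.asarray(weights) / np.sum(weights)  # normalize weights
    return (np.asarray(preds) * w[:, None]).sum(axis=0)

# Hypothetical usage: two chunks of 3 actions each, 6-dim commands
buffer = {0: np.ones((3, 6)), 1: np.ones((3, 6))}
action = ensemble_action(buffer, t=2)
```

Averaging over overlapping chunks is what smooths the commanded thrust/torque profile: a single noisy chunk prediction cannot jerk the control signal, because it is blended with earlier predictions for the same step.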