🤖 AI Summary
To address the challenge of transferring locomotion policies across diverse legged robot morphologies and tasks, this paper proposes a transferable latent-to-latent gait policy framework. Methodologically, it pretrains a unified latent policy coupled with lightweight task-specific encoder-decoder modules, introduces a diffusion-based recovery module to preserve the integrity of latent representations, and adopts an efficient adaptation paradigm that freezes the pretrained latent policy while fine-tuning only the encoders and decoders. The key contribution is the first general-purpose latent policy architecture supporting cross-morphology (quadrupedal, bipedal, hexapedal) and cross-task (walking, slope climbing, obstacle traversal) transfer, enhanced by diffusion modeling to improve latent-space fidelity. Experiments demonstrate zero-shot transfer in simulation and on real-world robotic platforms, a 70% reduction in required fine-tuning samples, and a threefold acceleration in convergence speed.
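The diffusion-based recovery objective mentioned above can be sketched as follows. This is a deliberately simplified surrogate, not the paper's actual module: a full diffusion model denoises over many timesteps with a learned noise schedule, whereas this sketch shows only a single noise-then-reconstruct step with hypothetical linear maps (`W_enc`, `W_rec`) and dimensions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
obs_dim, latent_dim = 48, 32  # hypothetical sizes, not from the paper

# Hypothetical linear encoder and recovery (denoiser) weights -- illustration only.
W_enc = rng.normal(scale=0.1, size=(latent_dim, obs_dim))
W_rec = rng.normal(scale=0.1, size=(obs_dim, latent_dim))

def recovery_loss(obs, noise_scale=0.1):
    """One-step stand-in for the diffusion recovery objective:
    perturb the latent with Gaussian noise, reconstruct the
    observation from the noisy latent, and penalize the
    reconstruction error so the latent keeps the information
    needed to recover the original observation."""
    z = W_enc @ obs                                  # encode into latent space
    z_noisy = z + noise_scale * rng.normal(size=z.shape)  # corrupt the latent
    obs_hat = W_rec @ z_noisy                        # attempt reconstruction
    return float(np.mean((obs_hat - obs) ** 2))      # reconstruction MSE

obs = rng.normal(size=obs_dim)
loss = recovery_loss(obs)
```

Minimizing this loss during pretraining pressures the encoder to produce latents from which the observation remains recoverable, which is the role the paper assigns to its recovery module.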
📝 Abstract
Reinforcement learning (RL) has demonstrated remarkable capability in acquiring robot skills, but learning each new skill still requires substantial data collection for training. The pretrain-and-finetune paradigm offers a promising approach for efficiently adapting to new robot entities and tasks. Inspired by the idea that acquired knowledge can accelerate learning of new tasks on the same robot and help a new robot master a previously trained task, we propose a latent training framework in which a transferable latent-to-latent locomotion policy is pretrained alongside diverse task-specific observation encoders and action decoders. This latent-space policy maps encoded latent observations to latent actions, which are then decoded into robot commands, giving it the potential to learn general, abstract motion skills. To retain the information essential for decision-making and control, we introduce a diffusion recovery module that minimizes an information reconstruction loss during the pretraining stage. During the fine-tuning stage, the pretrained latent-to-latent locomotion policy remains fixed, while only the lightweight task-specific encoder and decoder are optimized for efficient adaptation. Our method allows a robot to leverage its own prior experience across different tasks, as well as the experience of other morphologically diverse robots, to accelerate adaptation. We validate our approach through extensive simulations and real-world experiments, demonstrating that the pretrained latent-to-latent locomotion policy effectively generalizes to new robot entities and tasks with improved efficiency.
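The encode-act-decode pipeline described in the abstract can be illustrated with a minimal sketch. All dimensions, the linear maps, and the `tanh` nonlinearities here are assumptions for illustration; the paper's actual networks and training procedure are not specified in this summary. The key structural point it shows is which parameters are task-specific (encoder, decoder) versus shared and frozen at fine-tune time (the latent-to-latent policy).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- not taken from the paper.
obs_dim, latent_obs_dim, latent_act_dim, act_dim = 48, 32, 32, 12

# Task-specific modules: lightweight maps, the ONLY parameters
# updated when fine-tuning on a new robot entity or task.
W_enc = rng.normal(scale=0.1, size=(latent_obs_dim, obs_dim))
W_dec = rng.normal(scale=0.1, size=(act_dim, latent_act_dim))

# Shared latent-to-latent locomotion policy: pretrained across
# tasks/morphologies and kept frozen during fine-tuning.
W_latent = rng.normal(scale=0.1, size=(latent_act_dim, latent_obs_dim))

def policy(obs):
    z_obs = np.tanh(W_enc @ obs)       # encode observation into latent space
    z_act = np.tanh(W_latent @ z_obs)  # frozen latent-to-latent policy step
    return W_dec @ z_act               # decode latent action into robot commands

obs = rng.normal(size=obs_dim)         # e.g. joint states + base velocity
action = policy(obs)                   # e.g. 12 joint targets for a quadruped
```

Because only `W_enc` and `W_dec` change per task, adapting to a new morphology amounts to fitting two small maps around a fixed shared core, which is the mechanism behind the reported reduction in fine-tuning samples.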