🤖 AI Summary
This work addresses the challenge of collecting pre-training data online for zero-shot reinforcement learning on real quadrupedal robots without prior task knowledge, aiming for both high diversity and high relevance in the collected behaviors. The authors propose FB-MEBE, a novel algorithm that, for the first time, integrates maximum-entropy behavioral exploration into an online zero-shot reinforcement learning framework. By combining an entropy-maximizing behavior exploration objective with a regularizing critic, FB-MEBE guides the policy toward natural and physically feasible locomotion skills. Built on the Forward-Backward optimization scheme, the method outperforms existing exploration strategies across multiple simulated downstream tasks and enables zero-shot Sim2Real transfer, deploying directly on a physical quadruped robot without any fine-tuning.
📝 Abstract
Zero-shot reinforcement learning (RL) algorithms aim to learn a family of policies from a reward-free dataset, and recover optimal policies for any reward function directly at test time. Naturally, the quality of the pre-training dataset determines the performance of the recovered policies across tasks. However, pre-collecting a relevant, diverse dataset without prior knowledge of the downstream tasks of interest remains a challenge. In this work, we study $\textit{online}$ zero-shot RL for quadrupedal control on real robotic systems, building upon the Forward-Backward (FB) algorithm. We observe that undirected exploration yields low-diversity data, leading to poor downstream performance and rendering policies impractical for direct hardware deployment. Therefore, we introduce FB-MEBE, an online zero-shot RL algorithm that combines an unsupervised behavior exploration strategy with a regularization critic. FB-MEBE promotes exploration by maximizing the entropy of the achieved behavior distribution. Additionally, the regularization critic shapes the recovered policies toward more natural and physically plausible behaviors. We empirically demonstrate that FB-MEBE achieves improved performance compared to other exploration strategies in a range of simulated downstream tasks, and that it yields natural policies that can be deployed to hardware without further fine-tuning. Videos and code are available on our website.
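The abstract states that FB-MEBE drives exploration by maximizing the entropy of the achieved behavior distribution. The paper's exact estimator is not given here, but a common way to turn such an objective into an online exploration bonus is a particle-based k-nearest-neighbor entropy estimate over behavior embeddings: a sample far from its neighbors in embedding space receives a larger intrinsic reward. The sketch below illustrates that generic idea; the function name, the k-NN estimator, and the log scaling are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def knn_entropy_bonus(embeddings, k=3):
    """Particle-based entropy bonus over a batch of behavior embeddings.

    Hypothetical sketch: rewards each sample by the distance to its k-th
    nearest neighbor, a standard nonparametric proxy for the entropy of
    the batch distribution (not necessarily FB-MEBE's exact estimator).
    """
    # Pairwise Euclidean distances between all embeddings in the batch.
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Sort each row; column 0 is the self-distance (always 0),
    # so column k is the distance to the k-th nearest neighbor.
    kth_dist = np.sort(dists, axis=1)[:, k]
    # Log scaling keeps the bonus well-behaved as distances grow.
    return np.log(1.0 + kth_dist)

# Toy usage: a batch of 16 random 4-dimensional behavior embeddings.
rng = np.random.default_rng(0)
z = rng.normal(size=(16, 4))
bonus = knn_entropy_bonus(z)  # one nonnegative bonus per sample
```

Under this scheme, adding the bonus to the exploration policy's reward pushes newly collected behaviors away from those already in the buffer, which is one plausible reading of "maximizing the entropy of the achieved behavior distribution."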