🤖 AI Summary
This work addresses the gap between large-scale pretraining and efficient, safe fine-tuning in humanoid robot control. The authors show that off-policy Soft Actor-Critic (SAC), run with large-batch updates and a high update-to-data (UTD) ratio, supports large-scale pretraining and enables zero-shot deployment on real hardware. During fine-tuning, a physics-informed world model confines stochastic exploration to simulation, improving sample efficiency, while only deterministic policies are executed in the real environment to ensure safety. This framework significantly improves the adaptability and sample efficiency of humanoid robots in novel environments and on out-of-distribution tasks.
📝 Abstract
Reinforcement learning (RL) is widely used for humanoid control, with on-policy methods such as Proximal Policy Optimization (PPO) enabling robust training via large-scale parallel simulation and, in some cases, zero-shot deployment to real robots. However, the low sample efficiency of on-policy algorithms limits safe adaptation to new environments. Although off-policy RL and model-based RL have shown improved sample efficiency, a gap remains between large-scale pretraining and efficient fine-tuning on humanoids. In this paper, we find that off-policy Soft Actor-Critic (SAC), with large-batch updates and a high Update-To-Data (UTD) ratio, reliably supports large-scale pretraining of humanoid locomotion policies, achieving zero-shot deployment on real robots. For adaptation, we demonstrate that these SAC-pretrained policies can be fine-tuned in new environments and on out-of-distribution tasks using model-based methods. Data collection in the new environment executes a deterministic policy, while stochastic exploration is confined to a physics-informed world model. This separation mitigates the risks of random exploration during adaptation while preserving exploratory coverage for improvement. Overall, the approach couples the wall-clock efficiency of large-scale simulation during pretraining with the sample efficiency of model-based learning during fine-tuning.
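The two training phases described above can be sketched in a minimal control-flow skeleton. This is an illustrative sketch, not the paper's implementation: all function names, the FIFO buffer, and the default `utd_ratio`/`batch_size` values are hypothetical placeholders, and the actual SAC losses, world-model learning, and hardware interfaces are abstracted behind callbacks.

```python
import random


class ReplayBuffer:
    """Minimal FIFO replay buffer (hypothetical stand-in for a real one)."""

    def __init__(self, capacity=100_000):
        self.storage = []
        self.capacity = capacity

    def add(self, transition):
        self.storage.append(transition)
        if len(self.storage) > self.capacity:
            self.storage.pop(0)

    def sample(self, batch_size):
        # Sampling with replacement keeps the sketch valid for small buffers.
        return random.choices(self.storage, k=batch_size)


def pretrain_high_utd(collect_step, sac_update, num_env_steps,
                      utd_ratio=8, batch_size=4096):
    """Phase 1: for every simulated environment step, run `utd_ratio`
    SAC gradient updates on large replayed batches (high UTD ratio)."""
    buffer = ReplayBuffer()
    updates = 0
    for _ in range(num_env_steps):
        buffer.add(collect_step())  # one transition from parallel simulation
        for _ in range(utd_ratio):
            sac_update(buffer.sample(batch_size))
            updates += 1
    return updates


def finetune_step(real_env_step, model_rollout, model_update, policy_update,
                  deterministic_policy, stochastic_policy):
    """Phase 2: real-world data is collected with the *deterministic* policy
    (no action noise on hardware), while *stochastic* exploration happens
    only inside the learned, physics-informed world model."""
    real_transition = real_env_step(deterministic_policy)  # safe rollout
    model_update(real_transition)            # fit world model to real data
    imagined = model_rollout(stochastic_policy)  # explore in imagination
    policy_update(imagined)                  # improve policy on imagined data
    return real_transition, imagined
```

The key design point the sketch captures is the separation of roles: gradient-hungry updates draw on replayed simulation data during pretraining, and during adaptation the only policy ever executed on the robot is deterministic, with exploratory action sampling restricted to model rollouts.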