🤖 AI Summary
Real-world robot learning is hindered by high interaction costs, scarce expert demonstrations, and the sim-to-real gap. This work fine-tunes vision-language-action (VLA) policies via reinforcement learning inside an action-conditioned video world model trained on real video–action data, using a vision-language model to automatically evaluate rollout outcomes and provide reward signals. The method enables, for the first time, efficient RL fine-tuning inside a world model, and supports multi-instruction following, generalization to novel scenes, test-time adaptation, and online co-optimization of the policy and world model. Experiments on the Bridge platform demonstrate up to an 18× improvement over supervised fine-tuning and up to a 2× gain over a conventional software simulator, substantially improving task performance on the real robot.
📄 Abstract
Robot learning from interaction with the physical world is fundamentally bottlenecked by the cost of physical interaction. The two alternatives, supervised finetuning (SFT) from expert demonstrations and reinforcement learning (RL) in a software-based simulator, are limited by the amount of expert data available and by the sim-to-real gap for manipulation. With the recent emergence of world models learned from real-world video-action data, we ask whether training a policy in a world model can be more effective than supervised learning or software simulation at achieving better real-robot performance. We propose World-Gymnast, which performs RL finetuning of a vision-language-action (VLA) policy by rolling out the policy in an action-conditioned video world model and rewarding the rollouts with a vision-language model (VLM). On the Bridge robot setup, World-Gymnast outperforms SFT by as much as 18x and a software simulator by as much as 2x. More importantly, World-Gymnast demonstrates intriguing capabilities of RL with a world model, including training on diverse language instructions and novel scenes generated by the world model, test-time training in a novel scene, and online iterative improvement of the world model and policy. Our results suggest that learning a world model and training robot policies in the cloud could be the key to bridging the gap between robots that work in demonstrations and robots that can work in anyone's household.
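The abstract's core loop — roll the VLA policy out inside the video world model, have a VLM judge the rollout, and update the policy with RL — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: `VLAPolicy`, `WorldModel`, and `vlm_reward` are hypothetical toy stand-ins, vectors stand in for video frames, and a plain REINFORCE update stands in for whichever RL algorithm World-Gymnast actually uses.

```python
# Minimal sketch of RL finetuning inside a learned world model.
# All components here (VLAPolicy, WorldModel, vlm_reward) are toy
# stand-ins, NOT the paper's actual models or training algorithm.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, HORIZON = 64, 7, 16  # toy sizes; real models use video frames


class VLAPolicy(nn.Module):
    """Stand-in for a vision-language-action policy: maps an observation
    to a Gaussian action distribution (instruction conditioning omitted)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, ACT_DIM))
        self.log_std = nn.Parameter(torch.zeros(ACT_DIM))

    def dist(self, obs):
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())


class WorldModel(nn.Module):
    """Stand-in for an action-conditioned video world model: predicts the
    next observation from the current observation and the chosen action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, OBS_DIM))

    @torch.no_grad()
    def step(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


def vlm_reward(frames, instruction):
    """Stand-in for the VLM judge: in the paper, a vision-language model
    watches the generated rollout and scores task success. Here: random."""
    return torch.rand(())


policy, world_model = VLAPolicy(), WorldModel()
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

for iteration in range(100):
    obs = torch.randn(OBS_DIM)      # stand-in for an initial camera frame
    log_probs, frames = [], [obs]
    for t in range(HORIZON):        # rollout happens entirely in the world model
        d = policy.dist(obs)
        act = d.sample()
        log_probs.append(d.log_prob(act).sum())
        obs = world_model.step(obs, act)
        frames.append(obs)
    reward = vlm_reward(frames, "put the carrot on the plate")
    loss = -reward * torch.stack(log_probs).sum()  # REINFORCE on the VLM score
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note what the sketch makes concrete: no physical robot (and no hand-built simulator) appears anywhere in the loop, and the reward comes from a model rather than a hand-designed success detector, which is what lets this kind of training scale to diverse instructions and scenes.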