Scaling Sim-to-Real Reinforcement Learning for Robot VLAs with Generative 3D Worlds

📅 2026-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two obstacles to reinforcement-learning fine-tuning of vision-language-action (VLA) models: the limited diversity of real-world scenes and objects, which leads to policy overfitting, and the high cost of hand-building traditional simulation environments, which restricts generalization. To overcome these challenges, the study introduces, for the first time, a generative 3D world model coupled with a language-driven scene designer that automatically synthesizes hundreds of diverse, interactive simulated environments, enabling scalable and highly parallel reinforcement-learning fine-tuning of VLA models. By combining high-fidelity digital twins with domain randomization, the approach substantially improves sim-to-real transfer. Experiments show task success rates rising from 9.7% to 79.8% in simulation and from 21.7% to 75% in the real world, alongside markedly improved zero-shot generalization.

📝 Abstract
The strong performance of large vision-language models (VLMs) trained with reinforcement learning (RL) has motivated similar approaches for fine-tuning vision-language-action (VLA) models in robotics. Many recent works fine-tune VLAs directly in the real world to avoid addressing the sim-to-real gap. While real-world RL circumvents sim-to-real issues, it inherently limits the generality of the resulting VLA, as scaling scene and object diversity in the physical world is prohibitively difficult. This leads to the paradoxical outcome of transforming a broadly pretrained model into an overfitted, scene-specific policy. Training in simulation can instead provide access to diverse scenes, but designing those scenes is also costly. In this work, we show that VLAs can be RL fine-tuned without sacrificing generality and with reduced labor by leveraging 3D world generative models. Using these models together with a language-driven scene designer, we generate hundreds of diverse interactive scenes containing unique objects and backgrounds, enabling scalable and highly parallel policy learning. Starting from a pretrained imitation baseline, our approach increases simulation success from 9.7% to 79.8% while achieving a 1.25$\times$ speedup in task completion time. We further demonstrate successful sim-to-real transfer enabled by the quality of the generated digital twins together with domain randomization, improving real-world success from 21.7% to 75% and achieving a 1.13$\times$ speedup. Finally, we further highlight the benefits of leveraging the effectively unlimited data from 3D world generative models through an ablation study showing that increasing scene diversity directly improves zero-shot generalization.
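The abstract's core recipe is to generate many scene variants (unique objects, backgrounds, randomized visual and physical parameters) and fine-tune the policy across them in parallel. The paper's pipeline is not released, so the sketch below is purely illustrative: `design_scene`, `generate_scenes`, and every parameter name are assumptions standing in for the language-driven scene designer and domain randomization, not the authors' API.

```python
import random

def design_scene(prompt, rng):
    """Stand-in for a language-driven scene designer: turns a task prompt
    into one scene specification with randomized parameters (hypothetical)."""
    objects = ["mug", "block", "bottle", "can"]
    return {
        "prompt": prompt,
        "objects": rng.sample(objects, k=2),
        # Domain randomization: vary appearance/physics per scene so the
        # policy cannot overfit to any single environment.
        "lighting": rng.uniform(0.5, 1.5),
        "friction": rng.uniform(0.3, 1.0),
        "camera_jitter": rng.uniform(-0.05, 0.05),
    }

def generate_scenes(prompts, n_per_prompt, seed=0):
    """Produce a large batch of diverse scene specs, which a parallel RL
    fine-tuning loop would then distribute across simulator workers."""
    rng = random.Random(seed)
    return [design_scene(p, rng) for p in prompts for _ in range(n_per_prompt)]

scenes = generate_scenes(["pick up the mug", "stack the blocks"], n_per_prompt=50)
print(len(scenes))  # 100 distinct randomized scene specs
```

In the actual system, each spec would be rendered into an interactive 3D world by the generative model; the ablation in the abstract suggests that growing this scene pool directly improves zero-shot generalization.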
Problem

Research questions and friction points this paper is trying to address.

sim-to-real gap
vision-language-action models
scene diversity
reinforcement learning
robotics
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative 3D worlds
vision-language-action models
sim-to-real transfer
reinforcement learning
scene diversity