World Action Models are Zero-shot Policies

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes DreamZero, a World Action Model (WAM) that addresses the limited physical generalization of existing vision-language-action models to unseen motions in new environments. Built on a pretrained video diffusion backbone, DreamZero jointly predicts future video frames and actions, learning physical dynamics and diverse skills from heterogeneous robot data so that it can act as a zero-shot policy. Through model- and system-level optimizations, the 14B-parameter autoregressive video diffusion model supports real-time closed-loop control at 7 Hz. In real-robot experiments, DreamZero achieves over a 2× improvement in task and environment generalization over state-of-the-art methods; video-only demonstrations from other robots or humans, requiring just 10–20 minutes of data, yield a relative gain of more than 42% on unseen tasks, and only 30 minutes of play data suffice for few-shot adaptation to a new embodiment while preserving zero-shot generalization.

📝 Abstract
State-of-the-art Vision-Language-Action (VLA) models excel at semantic generalization but struggle to generalize to unseen physical motions in novel environments. We introduce DreamZero, a World Action Model (WAM) built upon a pretrained video diffusion backbone. Unlike VLAs, WAMs learn physical dynamics by predicting future world states and actions, using video as a dense representation of how the world evolves. By jointly modeling video and action, DreamZero learns diverse skills effectively from heterogeneous robot data without relying on repetitive demonstrations. This results in over 2× improvement in generalization to new tasks and environments compared to state-of-the-art VLAs in real robot experiments. Crucially, through model and system optimizations, we enable a 14B autoregressive video diffusion model to perform real-time closed-loop control at 7 Hz. Finally, we demonstrate two forms of cross-embodiment transfer: video-only demonstrations from other robots or humans yield a relative improvement of over 42% on unseen task performance with just 10–20 minutes of data. More surprisingly, DreamZero enables few-shot embodiment adaptation, transferring to a new embodiment with only 30 minutes of play data while retaining zero-shot generalization.
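The joint video-and-action prediction used for closed-loop control can be pictured as a receding-horizon loop: the model predicts the next world state together with a chunk of actions, the robot executes only the first action, then re-observes and repeats. The sketch below is purely illustrative — the class name, tensor shapes, action dimension, and chunk size are assumptions for demonstration, and the toy model stands in for the paper's 14B autoregressive video diffusion backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyWorldActionModel:
    """Stand-in for a world action model: given the frame history, it
    jointly 'predicts' the next video frame and a chunk of actions.
    (A real WAM would run an autoregressive video diffusion model here;
    this toy just returns random tensors of the right shapes.)"""

    def __init__(self, frame_shape=(32, 32, 3), action_dim=7, chunk=4):
        self.frame_shape = frame_shape
        self.action_dim = action_dim
        self.chunk = chunk

    def predict(self, past_frames):
        next_frame = rng.random(self.frame_shape)          # predicted world state
        actions = rng.random((self.chunk, self.action_dim))  # predicted action chunk
        return next_frame, actions

def closed_loop(model, get_observation, send_action, steps=10):
    """Receding-horizon control: predict a chunk of actions, execute
    only the first one, re-observe the world, and repeat. At 7 Hz each
    iteration would have a ~143 ms budget."""
    history = [get_observation()]
    executed = []
    for _ in range(steps):
        _, action_chunk = model.predict(history)
        send_action(action_chunk[0])   # execute only the first action
        executed.append(action_chunk[0])
        history.append(get_observation())  # fresh observation closes the loop
    return executed
```

A usage sketch: `closed_loop(ToyWorldActionModel(), camera_read, robot.step, steps=70)` would correspond to roughly ten seconds of control at 7 Hz. Executing only the first action of each predicted chunk is a standard receding-horizon choice that keeps the policy reactive to new observations.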
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action models
physical generalization
unseen motions
novel environments
zero-shot policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

World Action Model
video diffusion
zero-shot policy
cross-embodiment transfer
real-time closed-loop control