Learning Real-World Action-Video Dynamics with Heterogeneous Masked Autoregression

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inefficiency, poor generalization, and insufficient real-time performance in video dynamics modeling for robot learning, this paper proposes the Heterogeneous Masked Autoregression (HMA) framework for interactive video world modeling. Methodologically, it introduces a heterogeneous pretraining paradigm that integrates action-video data across diverse robot embodiments, tasks, and environments, and designs a lightweight masked autoregressive architecture that jointly models video prediction (via quantized or soft tokens) and low-level action-conditioned dynamics across domains. Key contributions include: (1) improved visual fidelity and action controllability of generated videos; (2) roughly 15× faster inference, enabling real-time simulation; and (3) applicability to cross-platform policy evaluation and high-fidelity synthetic data generation, establishing an efficient, general-purpose video world model infrastructure for robot learning.
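To make the "masked autoregressive" generation concrete, the sketch below shows the iterative masked-decoding pattern the summary describes: start with every video-token position masked, then repeatedly commit the highest-confidence predictions until the frame's token grid is filled. All names here (`toy_predictor`, `masked_ar_decode`) and the random stand-in model are illustrative assumptions, not the paper's implementation; a real HMA-style model would condition on video history and low-level actions with a learned network.

```python
import numpy as np

MASK = -1  # sentinel for a masked token position

def toy_predictor(tokens, action, vocab_size, rng):
    """Stand-in for a learned model: returns per-position logits.
    We crudely bias logits toward the action id to mimic
    action conditioning; a real model would be a trained network."""
    logits = rng.standard_normal((tokens.size, vocab_size))
    logits[:, action % vocab_size] += 1.0
    return logits

def masked_ar_decode(n_tokens, vocab_size, action, n_steps=4, seed=0):
    """Iterative masked decoding: each step, unmask a subset of the
    remaining masked positions, committing the most confident predictions."""
    rng = np.random.default_rng(seed)
    tokens = np.full(n_tokens, MASK, dtype=int)
    for step in range(n_steps):
        n_masked = int((tokens == MASK).sum())
        if n_masked == 0:
            break
        logits = toy_predictor(tokens, action, vocab_size, rng)
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        conf = probs.max(axis=1)
        conf[tokens != MASK] = -np.inf  # only fill masked positions
        # commit a growing share of positions each step
        k = max(1, n_masked // (n_steps - step))
        commit = np.argsort(conf)[-k:]
        tokens[commit] = probs[commit].argmax(axis=1)
    return tokens
```

Decoding many token positions per step (rather than one at a time, as in standard autoregression) is what makes this family of models fast enough for the real-time use the summary highlights.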

📝 Abstract
We propose Heterogeneous Masked Autoregression (HMA) for modeling action-video dynamics to generate high-quality data and evaluation in scaling robot learning. Building interactive video world models and policies for robotics is difficult due to the challenge of handling diverse settings while maintaining computational efficiency to run in real time. HMA uses heterogeneous pre-training from observations and action sequences across different robotic embodiments, domains, and tasks. HMA uses masked autoregression to generate quantized or soft tokens for video predictions. HMA achieves better visual fidelity and controllability than previous robotic video generation models, with 15 times faster speed in the real world. After post-training, this model can be used as a video simulator from low-level action inputs for evaluating policies and generating synthetic data. See this link https://liruiw.github.io/hma for more information.
Problem

Research questions and friction points this paper is trying to address.

Modeling action-video dynamics for robotics
Generating high-quality video simulations efficiently
Improving visual fidelity and controllability in robotics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heterogeneous Masked Autoregression for video dynamics
Pre-training across diverse robotic tasks
Real-time video generation with high fidelity
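The "video simulator" use case above can be sketched as a closed-loop rollout: a policy picks an action from the current observation, and the world model predicts the next frame, so evaluation happens entirely in "imagination" without a real robot. `dummy_world_model` and `dummy_policy` are hypothetical stand-ins, not the paper's components.

```python
import numpy as np

def dummy_world_model(frame, action, rng):
    """Stand-in for a learned action-video dynamics model: predicts
    the next frame's tokens from the current frame and an action."""
    return (frame + action + rng.integers(0, 2, size=frame.shape)) % 256

def dummy_policy(frame, rng):
    """Stand-in policy mapping an observation to a discrete action."""
    return int(frame.mean()) % 4

def evaluate_policy_in_simulator(n_steps=10, seed=0):
    """Closed-loop rollout where the world model replaces a real
    robot or physics simulator for policy evaluation."""
    rng = np.random.default_rng(seed)
    frame = rng.integers(0, 256, size=(4, 4))  # tiny token grid as a "frame"
    trajectory = []
    for _ in range(n_steps):
        action = dummy_policy(frame, rng)
        frame = dummy_world_model(frame, action, rng)
        trajectory.append((action, frame.copy()))
    return trajectory
```

The same loop can also log the generated frames as synthetic training data, which is the second downstream use the paper claims.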