EVA: Aligning Video World Models with Executable Robot Actions via Inverse Dynamics Rewards

📅 2026-03-18
🤖 AI Summary
This work addresses a limitation of existing video world models: they generate temporally coherent visual sequences but lack constraints ensuring robotic executability, so inverse dynamics models often decode actions from them that violate rigid-body or kinematic constraints. To bridge this executability gap, the authors propose the Executable Video Alignment (EVA) framework, which introduces a reinforcement learning reward based on an inverse dynamics model during the post-training phase of the video world model. EVA repurposes the inverse dynamics model as a reward function to explicitly align generated trajectories with physically feasible actions, rewarding smooth velocity, acceleration, and jerk profiles while penalizing actions that violate embodiment constraints. Experiments on the RoboTwin benchmark and a real dual-arm robot demonstrate that EVA significantly reduces embodiment artifacts in generated rollouts and improves downstream task success rates.

📝 Abstract
Video generative models are increasingly used as world models for robotics, where a model generates a future visual rollout conditioned on the current observation and task instruction, and an inverse dynamics model (IDM) converts the generated frames into executable robot actions. However, current video world models lack explicit executability constraints. As a result, visually coherent rollouts may still violate rigid-body and kinematic consistency, producing unstable or infeasible control commands when decoded by an IDM. We refer to this mismatch between visual generation and physically executable control as the executability gap. While this gap can be mitigated at inference time using techniques such as rejection sampling, such approaches are inefficient due to the high cost of video generation. In this paper, we leverage the executability gap as a training signal and introduce Executable Video Alignment (EVA), a reinforcement-learning post-training framework for aligning video world models. EVA trains an inverse dynamics model on real robot trajectories and repurposes it as a reward model that evaluates generated videos through the action sequences they induce, encouraging smooth motions measured by velocity, acceleration, and jerk while penalizing actions that violate embodiment constraints. Importantly, the reward remains informative even when generated videos contain severe visual artifacts, since such artifacts typically translate into unstable or out-of-bound actions. Experiments on the RoboTwin benchmark and a real bimanual robot show that EVA reduces embodiment-specific artifacts in generated rollouts and improves downstream task execution success.
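The reward described above, decoding actions from generated frames and scoring their smoothness and feasibility, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, weights, and finite-difference formulation are assumptions, and the IDM step (frames to joint-space actions) is taken as given, with the sketch scoring an already-decoded action sequence.

```python
# Hypothetical sketch of an IDM-derived executability reward.
# `actions` is a list of per-step joint vectors decoded by an IDM from a
# generated video; `lower`/`upper` are the embodiment's joint limits.
# All weights are illustrative, not values from the paper.

def finite_diff(seq):
    """First-order finite difference of a sequence of joint vectors."""
    return [[b - a for a, b in zip(x, y)] for x, y in zip(seq, seq[1:])]

def sq_norm(vecs):
    """Sum of squared entries across all vectors."""
    return sum(v * v for vec in vecs for v in vec)

def executability_reward(actions, lower, upper,
                         w_vel=1.0, w_acc=1.0, w_jerk=1.0, w_bound=10.0):
    """Higher (less negative) is better: smooth, in-bound action sequences."""
    vel = finite_diff(actions)       # velocity  ~ 1st difference
    acc = finite_diff(vel)           # acceleration ~ 2nd difference
    jerk = finite_diff(acc)          # jerk ~ 3rd difference
    smooth_cost = (w_vel * sq_norm(vel)
                   + w_acc * sq_norm(acc)
                   + w_jerk * sq_norm(jerk))
    # Penalize any joint value outside the embodiment's limits.
    bound_cost = sum(
        max(lo - a, 0.0) + max(a - hi, 0.0)
        for step in actions
        for a, lo, hi in zip(step, lower, upper)
    )
    return -(smooth_cost + w_bound * bound_cost)
```

Note how this makes the reward informative even for visually broken rollouts: severe artifacts tend to decode into erratic or out-of-bound actions, which this score penalizes heavily.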
Problem

Research questions and friction points this paper is trying to address.

executability gap
video world models
inverse dynamics model
robot actions
kinematic consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Executable Video Alignment
inverse dynamics model
world model alignment
executability gap
reinforcement learning
Ruixiang Wang
The Chinese University of Hong Kong, Shenzhen; DexForce Technology Co., Ltd.
Qingming Liu
The Chinese University of Hong Kong, Shenzhen
Yueci Deng
The Chinese University of Hong Kong, Shenzhen; DexForce Technology Co., Ltd.
Guiliang Liu
The Chinese University of Hong Kong, Shenzhen
Reinforcement Learning, Machine Learning
Zhen Liu
The Chinese University of Hong Kong, Shenzhen
Machine Learning, Computer Vision
Kui Jia
The Chinese University of Hong Kong, Shenzhen