🤖 AI Summary
To address the challenges of jointly modeling video generation and action prediction in robotics, namely high inference latency, weak multi-task generalization, and difficulty with cross-modal alignment, this paper proposes the first video-action joint latent space model. Methodologically: (1) it constructs a shared latent representation enabling cross-modal alignment between video frames and action sequences; (2) it introduces two lightweight diffusion heads for decoupled decoding, optimizing video reconstruction and action prediction separately so that video generation can be skipped at inference time; and (3) it employs masked-input joint training to support policy learning, forward/inverse dynamics modeling, and video prediction within a single model. Experiments demonstrate that this single unified model matches or surpasses task-specific models across multiple robotic benchmarks: action accuracy improves by 12.6%, and inference is 3.8× faster than video-generation baselines. The model thus achieves both strong generalization and real-time capability.
📝 Abstract
A unified video and action model holds significant promise for robotics, where videos provide rich scene information for action prediction, and actions provide dynamics information for video prediction. However, effectively combining video generation and action prediction remains challenging, and current video-generation-based methods struggle to match direct policy learning in action accuracy and inference speed. To bridge this gap, we introduce the Unified Video Action model (UVA), which jointly optimizes video and action predictions to achieve both high accuracy and efficient action inference. The key lies in learning a joint video-action latent representation and decoupling video-action decoding. The joint latent representation bridges the visual and action domains, effectively modeling the relationship between video and action sequences. Meanwhile, the decoupled decoding, powered by two lightweight diffusion heads, enables high-speed action inference by bypassing video generation during inference. Such a unified framework further enables versatile functionality through masked input training. By selectively masking actions or videos, a single model can tackle diverse tasks beyond policy learning, such as forward and inverse dynamics modeling and video generation. Via an extensive set of experiments, we demonstrate that UVA can serve as a general-purpose solution for a wide range of robotics tasks, such as policy learning, forward/inverse dynamics, and video observation prediction, without compromising performance compared to methods tailored for specific applications. Results are best viewed at https://unified-video-action-model.github.io/.
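The two ideas in the abstract, a shared video-action latent and decoupled per-modality decoding driven by input masking, can be illustrated with a toy sketch. This is not the authors' implementation: UVA uses transformer encoders and two diffusion heads, whereas here plain NumPy linear maps, a zero mask token, and all dimension names are hypothetical stand-ins chosen only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration; the real UVA operates on frame and action tokens.
VIDEO_DIM, ACTION_DIM, LATENT_DIM = 32, 8, 16

# Random linear maps stand in for the learned encoder and the two
# lightweight decoding heads (diffusion heads in the actual model).
W_video_enc = rng.normal(size=(VIDEO_DIM, LATENT_DIM)) * 0.1
W_action_enc = rng.normal(size=(ACTION_DIM, LATENT_DIM)) * 0.1
W_video_head = rng.normal(size=(LATENT_DIM, VIDEO_DIM)) * 0.1
W_action_head = rng.normal(size=(LATENT_DIM, ACTION_DIM)) * 0.1
MASK_TOKEN = np.zeros(LATENT_DIM)  # a learned mask embedding in the real model


def joint_latent(video, action, mask_video=False, mask_action=False):
    """Fuse (optionally masked) video and action inputs into one shared latent."""
    z_v = MASK_TOKEN if mask_video else video @ W_video_enc
    z_a = MASK_TOKEN if mask_action else action @ W_action_enc
    return z_v + z_a


def predict(video, action, decode, mask_video=False, mask_action=False):
    """Decoupled decoding: route the joint latent through a single head,
    so action inference never has to run the expensive video decoder."""
    z = joint_latent(video, action, mask_video, mask_action)
    if decode == "action":
        return z @ W_action_head
    if decode == "video":
        return z @ W_video_head
    raise ValueError(decode)


# Policy learning: observe video, mask the action input, decode actions only.
obs = rng.normal(size=VIDEO_DIM)
a_hat = predict(obs, np.zeros(ACTION_DIM), decode="action", mask_action=True)

# Forward dynamics / video prediction: condition on both, decode video.
v_hat = predict(obs, a_hat, decode="video")
```

Selecting which input is masked and which head decodes is what lets one set of weights cover policy learning, forward/inverse dynamics, and video prediction; at deployment the video head is simply never called.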