🤖 AI Summary
Existing vision-language-action models rely on static image pretraining, which struggles to capture physical dynamics and thus limits policy generalization and data efficiency. This work proposes an end-to-end video-action joint model that, for the first time, leverages intermediate denoising features from a video diffusion Transformer as temporal conditioning for an action diffusion Transformer, enabling direct transfer from implicit dynamics representations to action reasoning. A novel dual-stream matching objective is introduced, decoupling timesteps and noise scales to jointly optimize video prediction and action decision-making. The method achieves success rates of 98.6% on LIBERO and 50.8% on RoboCasa GR1, demonstrating over 10× improvement in sample efficiency and up to 7× faster convergence, while exhibiting strong zero-shot generalization on the Unitree G1 robot.
📝 Abstract
Vision-Language-Action (VLA) models have emerged as a promising paradigm for robot learning, but their representations are still largely inherited from static image-text pretraining, leaving physical dynamics to be learned from comparatively limited action data. Generative video models, by contrast, encode rich spatiotemporal structure and implicit physics, making them a compelling foundation for robotic manipulation. Yet their potential remains underexplored in the literature. To bridge this gap, we introduce DiT4DiT, an end-to-end Video-Action Model that couples a video Diffusion Transformer with an action Diffusion Transformer in a unified cascaded framework. Instead of relying on reconstructed future frames, DiT4DiT extracts intermediate denoising features from the video generation process and uses them as temporally grounded conditions for action prediction. We further propose a dual flow-matching objective with decoupled timesteps and noise scales for video prediction, hidden-state extraction, and action inference, enabling coherent joint training of both modules. Across simulation and real-world benchmarks, DiT4DiT achieves state-of-the-art results, reaching average success rates of 98.6% on LIBERO and 50.8% on RoboCasa GR1 while using substantially less training data. On the Unitree G1 robot, it also delivers superior real-world performance and strong zero-shot generalization. Importantly, DiT4DiT improves sample efficiency by over 10× and speeds up convergence by up to 7×, demonstrating that video generation can serve as an effective scaling proxy for robot policy learning. We release code and models at https://dit4dit.github.io/.
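To make the cascaded design concrete, the sketch below illustrates the two core ideas described in the abstract: (1) an action Transformer conditioned on *intermediate* hidden states tapped from a video Diffusion Transformer rather than on decoded frames, and (2) a dual flow-matching loss that samples independent (decoupled) timesteps and noise scales for the video and action streams. This is a minimal PyTorch sketch under assumed shapes and layer sizes; all class and variable names (`VideoActionSketch`, `tap_layer`, etc.) are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TinyDiTBlock(nn.Module):
    """Minimal Transformer block standing in for a DiT layer."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        h = self.n1(x)
        x = x + self.attn(h, h, h)[0]
        return x + self.mlp(self.n2(x))

class VideoActionSketch(nn.Module):
    """Hypothetical cascade: hidden states from an intermediate layer of the
    video stream condition the action stream via cross-attention."""
    def __init__(self, dim=64, video_layers=4, tap_layer=2):
        super().__init__()
        self.video_blocks = nn.ModuleList(TinyDiTBlock(dim) for _ in range(video_layers))
        self.tap_layer = tap_layer          # which layer's features to extract
        self.action_block = TinyDiTBlock(dim)
        self.cross = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.v_head = nn.Linear(dim, dim)   # video velocity prediction
        self.a_head = nn.Linear(dim, dim)   # action velocity prediction

    def forward(self, noisy_video, noisy_action):
        h, cond = noisy_video, None
        for i, blk in enumerate(self.video_blocks):
            h = blk(h)
            if i == self.tap_layer:
                cond = h  # intermediate denoising features, not decoded frames
        a = self.action_block(noisy_action)
        a = a + self.cross(a, cond, cond)[0]  # action stream attends to video features
        return self.v_head(h), self.a_head(a)

def dual_flow_matching_loss(model, video, action):
    """Dual flow-matching with decoupled timesteps for the two streams
    (illustrative of the decoupling idea, not the paper's exact objective)."""
    B = video.shape[0]
    t_v = torch.rand(B, 1, 1)               # video timestep
    t_a = torch.rand(B, 1, 1)               # independently sampled action timestep
    eps_v, eps_a = torch.randn_like(video), torch.randn_like(action)
    x_v = (1 - t_v) * eps_v + t_v * video   # linear interpolation path
    x_a = (1 - t_a) * eps_a + t_a * action
    pred_v, pred_a = model(x_v, x_a)
    # flow-matching target: the velocity (data - noise) of each path
    return (((pred_v - (video - eps_v)) ** 2).mean()
            + ((pred_a - (action - eps_a)) ** 2).mean())
```

A joint training step would sample a batch of video clips and action chunks, compute this single loss, and backpropagate through both streams at once, which is what lets video-prediction gradients shape the features the action head consumes.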