🤖 AI Summary
This work addresses three key challenges in embodied intelligence: scarcity of action annotations, weak cross-modal coupling, and difficulty in transferring pre-trained video knowledge. To this end, we propose a framework driven by text, an initial image, and robot joint states for joint video and action generation. Methodologically, we introduce a novel Bridge Attention mechanism that dynamically aligns the visual, linguistic, and proprioceptive modalities, and we design a parallel action diffusion branch that refines action trajectories at fine granularity without unfreezing the pre-trained video diffusion backbone. This architecture circumvents both the decoupling inherent in two-stage pipelines and the transfer bottleneck of single-modality adaptation. Evaluated on multiple public and real-robot datasets, our method significantly improves the spatiotemporal coherence of generated videos and the precision of action trajectories, establishing a scalable, video-driven paradigm for large-scale robotic policy learning.
📝 Abstract
We present a method to generate video-action pairs that follow text instructions, starting from an initial image observation and the robot's joint states. Our approach automatically provides action labels for video diffusion models, overcoming the common lack of action annotations and enabling their full use for robotic policy learning. Existing methods either adopt two-stage pipelines, which limit tightly coupled cross-modal information sharing, or adapt a single-modal diffusion model to a joint distribution, which cannot fully leverage pretrained video knowledge. To overcome these limitations, we (1) extend a pretrained video diffusion model with a parallel, dedicated action diffusion model that preserves pretrained knowledge, (2) introduce a Bridge Attention mechanism to enable effective cross-modal interaction, and (3) design an action refinement module that converts coarse actions into precise controls for low-resolution datasets. Extensive evaluations on multiple public benchmarks and real-world datasets demonstrate that our method generates higher-quality videos and more accurate actions, and significantly outperforms existing baselines, offering a scalable framework for leveraging large-scale video data for robotic learning.
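The abstract names a Bridge Attention mechanism for cross-modal interaction but does not give its equations. The general idea of one modality's tokens attending to another's can be sketched as plain scaled dot-product cross-attention; the sketch below is a minimal illustration, not the authors' implementation, and all token counts, dimensions, and names are made up for the example:

```python
# Hedged sketch: action tokens attend over a pooled context of video and
# text tokens via scaled dot-product cross-attention (pure Python, toy sizes).
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """queries: action tokens; keys/values: concatenated video+text tokens.
    Each argument is a list of equal-length vectors (lists of floats)."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Scaled dot-product scores of this query against every context token.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: 2 action tokens attend over 3 video tokens + 1 text token.
action_tokens = [[1.0, 0.0], [0.0, 1.0]]
context = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [1.0, 1.0]]  # video + text
fused = cross_attention(action_tokens, context, context)
```

In a full model each fused vector would feed the action diffusion branch, while the video backbone's weights stay frozen; here the point is only that attention weights sum to one, so each fused token is a convex combination of the context tokens.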