CoVAR: Co-generation of Video and Action for Robotic Manipulation via Multi-Modal Diffusion

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses three key challenges in embodied intelligence: the scarcity of action annotations, weak cross-modal coupling, and the difficulty of transferring pre-trained video knowledge. To this end, we propose a framework driven by text, an initial image, and joint states for joint video and action generation. Methodologically, we introduce a Bridge Attention mechanism that dynamically aligns the visual, linguistic, and proprioceptive modalities, and design a parallel action diffusion branch that refines action trajectories at fine granularity without unfreezing the pre-trained video diffusion backbone. This architecture avoids both the decoupling of two-stage pipelines and the transfer bottleneck of single-modality adaptation. Evaluated on multiple public benchmarks and real-robot datasets, our method significantly improves the spatiotemporal coherence of generated videos and the precision of action trajectories, establishing a scalable, video-driven paradigm for large-scale robotic policy learning.
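
The page contains no code, but the parallel-branch design described above can be made concrete with a rough sketch: one conditioned denoising step in which a frozen, pretrained video model and a trainable action branch run side by side, coupled through a bridge. The module names and the placeholder linear layers below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumption, not the authors' code): one joint denoising step with
# a frozen pretrained video branch and a trainable action branch coupled by a bridge.
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real networks.
frozen_video_denoiser = nn.Linear(1024, 1024).requires_grad_(False)  # placeholder for the pretrained video diffusion model
action_denoiser = nn.Linear(256, 256)                                # placeholder for the trainable action diffusion branch
bridge = nn.Linear(1024, 256)                                        # placeholder for the Bridge Attention coupling

def co_denoise_step(video_latent: torch.Tensor, action_latent: torch.Tensor):
    # Predict video noise with the frozen backbone (its weights are never updated).
    video_eps = frozen_video_denoiser(video_latent)
    # Condition the action branch on pooled video features via the bridge, then predict action noise.
    video_context = bridge(video_latent.mean(dim=1, keepdim=True))
    action_eps = action_denoiser(action_latent + video_context)
    return video_eps, action_eps

video_latent = torch.randn(2, 64, 1024)   # (batch, video tokens, channels)
action_latent = torch.randn(2, 1, 256)    # (batch, 1, channels) pooled action-trajectory latent
v_eps, a_eps = co_denoise_step(video_latent, action_latent)
print(v_eps.shape, a_eps.shape)           # torch.Size([2, 64, 1024]) torch.Size([2, 1, 256])
```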

📝 Abstract
We present a method to generate video-action pairs that follow text instructions, starting from an initial image observation and the robot's joint states. Our approach automatically provides action labels for video diffusion models, overcoming the common lack of action annotations and enabling their full use for robotic policy learning. Existing methods either adopt two-stage pipelines, which limit tightly coupled cross-modal information sharing, or rely on adapting a single-modal diffusion model for a joint distribution that cannot fully leverage pretrained video knowledge. To overcome these limitations, we (1) extend a pretrained video diffusion model with a parallel, dedicated action diffusion model that preserves pretrained knowledge, (2) introduce a Bridge Attention mechanism to enable effective cross-modal interaction, and (3) design an action refinement module to convert coarse actions into precise controls for low-resolution datasets. Extensive evaluations on multiple public benchmarks and real-world datasets demonstrate that our method generates higher-quality videos, more accurate actions, and significantly outperforms existing baselines, offering a scalable framework for leveraging large-scale video data for robotic learning.
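
As a rough illustration of how such a Bridge Attention layer might couple the two branches, the sketch below lets action tokens from the parallel action diffusion branch attend to features from the frozen video backbone. The class and variable names (`BridgeAttention`, `action_tokens`, `video_tokens`) and the single cross-attention design are assumptions, not the paper's implementation.

```python
# Minimal sketch (assumption, not the authors' code): a Bridge Attention block that
# lets the trainable action branch attend to features from a frozen video backbone.
import torch
import torch.nn as nn


class BridgeAttention(nn.Module):
    def __init__(self, action_dim: int, video_dim: int, num_heads: int = 8):
        super().__init__()
        # Queries come from the action branch; keys/values from the video branch.
        self.attn = nn.MultiheadAttention(
            embed_dim=action_dim, kdim=video_dim, vdim=video_dim,
            num_heads=num_heads, batch_first=True,
        )
        self.norm = nn.LayerNorm(action_dim)

    def forward(self, action_tokens: torch.Tensor, video_tokens: torch.Tensor) -> torch.Tensor:
        # action_tokens: (B, T_a, action_dim); video_tokens: (B, T_v, video_dim)
        bridged, _ = self.attn(action_tokens, video_tokens, video_tokens)
        return self.norm(action_tokens + bridged)  # residual update of the action stream


# Usage sketch: only the bridge and the action branch would be trained; the video
# features come from the frozen pretrained backbone.
video_tokens = torch.randn(2, 64, 1024)    # placeholder for frozen video-backbone features
action_tokens = torch.randn(2, 16, 256)    # placeholder for noisy action-trajectory tokens
bridge = BridgeAttention(action_dim=256, video_dim=1024)
print(bridge(action_tokens, video_tokens).shape)  # torch.Size([2, 16, 256])
```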
Problem

Research questions and friction points this paper is trying to address.

Generating video-action pairs from text instructions, an initial image observation, and the robot's joint states
Overcoming the lack of action annotations that limits robotic policy learning
Enabling tight cross-modal interaction while preserving pretrained video knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel video and action diffusion models preserve pretrained knowledge
Bridge Attention mechanism enables effective cross-modal interaction
Action refinement module converts coarse actions into precise controls
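
For the third point, a coarse-to-fine refinement step could be sketched roughly as below; the residual-MLP design and names (`ActionRefiner`, `coarse_actions`) are assumptions rather than the paper's actual module, which the abstract describes only as converting coarse actions into precise controls for low-resolution datasets.

```python
# Minimal sketch (assumption, not the authors' code): refine coarse action
# trajectories into precise controls by predicting a residual correction.
import torch
import torch.nn as nn


class ActionRefiner(nn.Module):
    def __init__(self, action_dim: int = 7, hidden_dim: int = 256, horizon: int = 16):
        super().__init__()
        # Operates on a flattened trajectory of `horizon` steps of `action_dim`-dim actions.
        in_dim = action_dim * horizon
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, in_dim),
        )
        self.action_dim, self.horizon = action_dim, horizon

    def forward(self, coarse_actions: torch.Tensor) -> torch.Tensor:
        # coarse_actions: (B, horizon, action_dim) from the action diffusion branch
        b = coarse_actions.shape[0]
        delta = self.net(coarse_actions.reshape(b, -1)).reshape(b, self.horizon, self.action_dim)
        return coarse_actions + delta  # residual refinement toward precise controls


refiner = ActionRefiner()
coarse = torch.randn(4, 16, 7)   # e.g. 7-DoF joint targets over a 16-step horizon
print(refiner(coarse).shape)     # torch.Size([4, 16, 7])
```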
Liudi Yang
University of Freiburg
Yang Bai
Ludwig Maximilian University of Munich
George Eskandar
University of Stuttgart
Computer Vision · Domain Adaptation · Generative AI · Autonomous Driving · 3D Reconstruction
Fengyi Shen
Technical University of Munich
Mohammad Altillawi
Huawei Heisenberg Research Center (Munich)
Dong Chen
Huawei Heisenberg Research Center (Munich)
Ziyuan Liu
Unknown affiliation
Robotics · Manipulation and Grasping · Computer Vision · Machine Learning
Abhinav Valada
Professor & Director of Robot Learning Lab, University of Freiburg
Robotics · Machine Learning · Computer Vision · Artificial Intelligence