ARDuP: Active Region Video Diffusion for Universal Policies

📅 2024-06-19
🏛️ IEEE/RJS International Conference on Intelligent RObots and Systems
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of automatically identifying critical interaction regions in video-driven decision-making—traditionally reliant on labor-intensive manual annotations. We propose an active region-conditioned, text-guided video diffusion framework. Methodologically, it integrates latent-space video diffusion models with text-video joint embeddings; an optical-flow-driven unsupervised active region extraction module localizes interaction regions without supervision, while inverse dynamics modeling decodes generated videos into executable control actions, enabling end-to-end generalizable policy learning. Our key contribution is the first integration of interaction region discovery directly into the video diffusion process—eliminating the need for human annotations and significantly enhancing policy focus on semantically salient regions. Evaluated on the CLIPort simulation environment and the BridgeData v2 real-world dataset, our approach achieves state-of-the-art performance in task success rate, semantic plausibility of video plans, and temporal fidelity.

Technology Category

Application Category

📝 Abstract
Sequential decision-making can be formulated as a text-conditioned video generation problem, where a video planner, guided by a text-defined goal, generates future frames visualizing planned actions, from which control actions are subsequently derived. In this work, we introduce Active Region Video Diffusion for Universal Policies (ARDuP), a novel framework for video-based policy learning that emphasizes the generation of active regions, i.e. potential interaction areas, enhancing the conditional policy’s focus on interactive areas critical for task execution. This innovative framework integrates active region conditioning with latent diffusion models for video planning and employs latent representations for direct action decoding during inverse dynamic modeling. By utilizing motion cues in videos for automatic active region discovery, our method eliminates the need for manual annotations of active regions. We validate ARDuP’s efficacy via extensive experiments on simulator CLIPort and the real-world dataset BridgeData v2, achieving notable improvements in success rates and generating convincingly realistic video plans.
Problem

Research questions and friction points this paper is trying to address.

Video Generation
Decision Making
Interactive Region Recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

ARDuP
Interactive Region Detection
Video Prediction
🔎 Similar Papers
No similar papers found.