PDPP: Projected Diffusion for Procedure Planning in Instructional Videos

📅 2023-03-26
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 23
Influential: 8
🤖 AI Summary
To address the reliance of procedure planning in instructional videos on frame-level annotations or natural-language instructions, which leaves prior methods prone to error accumulation, this paper proposes an end-to-end diffusion model supervised only by start/end-frame visual observations and task-level labels. The method casts action-sequence generation as a conditional distribution-fitting problem and requires no intermediate supervision. Its core contributions are: (i) modeling the entire intermediate action-sequence distribution with a diffusion model under coarse visual and task-level constraints, turning planning into sampling from that distribution; and (ii) a projection mechanism that injects visual and task-level conditions during both training and sampling, drastically reducing the dependence on fine-grained annotations. Built on a U-Net backbone, the model fuses start/end-frame visual embeddings through an efficient projection operator. Evaluated on three instructional-video datasets of different scales, it achieves state-of-the-art performance on multiple metrics while requiring no intermediate action annotations.
📝 Abstract
In this paper, we study the problem of procedure planning in instructional videos, which aims to make goal-directed plans given the current visual observations in unstructured real-life videos. Previous works cast this problem as a sequence planning problem and leverage either heavy intermediate visual observations or natural language instructions as supervision, resulting in complex learning schemes and expensive annotation costs. In contrast, we treat this problem as a distribution fitting problem. In this sense, we model the whole intermediate action sequence distribution with a diffusion model (PDPP), and thus transform the planning problem to a sampling process from this distribution. In addition, we remove the expensive intermediate supervision, and simply use task labels from instructional videos as supervision instead. Our model is a U-Net based diffusion model, which directly samples action sequences from the learned distribution with the given start and end observations. Furthermore, we apply an efficient projection method to provide accurate conditional guides for our model during the learning and sampling process. Experiments on three datasets with different scales show that our PDPP model can achieve the state-of-the-art performance on multiple metrics, even without the task supervision. Code and trained models are available at https://github.com/MCG-NJU/PDPP.
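The abstract describes sampling action sequences from a learned diffusion model while a projection step keeps the start/end observations and task label fixed as conditions. A minimal sketch of that idea is below, assuming a hypothetical 3-row layout (task / action / observation) and a plain DDPM reverse update; the paper's actual model is a U-Net over learned embeddings, so shapes and schedules here are illustrative only:

```python
import numpy as np

def project(x, cond):
    """Projection: overwrite the conditioned entries of the noisy sample
    with their known values (task row; start/end observation slots), so
    the reverse process effectively denoises only the action dimensions.
    The 3-row layout is an illustrative simplification."""
    x = x.copy()
    x[:, 0, :] = cond["task"][:, None]   # task label, repeated over the horizon
    x[:, 2, 0] = cond["obs_start"]       # start observation at the first step
    x[:, 2, -1] = cond["obs_end"]        # end observation at the last step
    return x

def sample_plan(eps_model, cond, horizon=4, n_steps=50, seed=0):
    """DDPM-style reverse diffusion with the projection applied before
    every denoising step and once more on the final sample."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, n_steps)
    alphas = 1.0 - betas
    abars = np.cumprod(alphas)
    b = cond["task"].shape[0]
    x = rng.standard_normal((b, 3, horizon))      # start from pure noise
    for t in reversed(range(n_steps)):
        x = project(x, cond)                      # inject conditions
        eps = eps_model(x, t)                     # predicted noise
        x = (x - betas[t] / np.sqrt(1.0 - abars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                                 # add noise except at the last step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return project(x, cond)
```

With any noise predictor `eps_model(x, t)`, the returned plan always agrees exactly with the given conditions on the projected entries, which is the point of applying the projection during sampling rather than only at training time.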
Problem

Research questions and friction points this paper is trying to address.

Automatic Action Planning
Instructional Videos
Error Accumulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

PDPP model
Projected diffusion
Action-sequence distribution modeling
Hanlin Wang
HKUST
Computer Vision, Video Understanding
Yilu Wu
Nanjing University
Computer Vision
Sheng Guo
Ant Group
Computer Vision, Deep Learning, LLM
Limin Wang
State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; Shanghai AI Laboratory, Shanghai 200232, China