🤖 AI Summary
This work addresses the ambiguity in procedural planning that arises when visually similar actions must be distinguished from visual observations alone. To resolve this, the authors propose language descriptions as a core intermediate representation. For the first time, they introduce a fine-tuned vision-language model (VLM) that translates visual observations into structured textual descriptions, which then condition a diffusion model, via their text embeddings, to generate action sequences. By aligning the visual and linguistic modalities, this approach enhances semantic discriminability and significantly improves the accuracy of predicted action sequences. The method achieves new state-of-the-art performance across three benchmark datasets—CrossTask, COIN, and NIV—outperforming existing approaches by substantial margins on multiple evaluation metrics.
📝 Abstract
Procedure planning requires a model to predict the sequence of actions that transforms a start visual observation into a goal observation in instructional videos. Most existing methods rely primarily on visual observations as input, so they often struggle with the inherent ambiguity that different actions can appear visually similar. In this work, we argue that language descriptions offer a more distinctive representation in the latent space for procedure planning. We introduce Language-Aware Planning (LAP), a novel method that leverages the expressiveness of language to bridge visual observation and planning. LAP uses a fine-tuned Vision-Language Model (VLM) to translate visual observations into text descriptions, predict actions, and extract text embeddings. These text embeddings are more distinctive than visual embeddings and condition a diffusion model that plans the action sequence. We evaluate LAP on three procedure planning benchmarks: CrossTask, COIN, and NIV. LAP achieves new state-of-the-art performance across multiple metrics and time horizons by a large margin, demonstrating the significant advantage of language-aware planning.
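The pipeline described in the abstract — caption the start and goal observations with a VLM, embed the captions, then condition a diffusion-style sampler on those embeddings to produce an action sequence — can be sketched as a toy end-to-end flow. Everything below is illustrative and not the paper's implementation: `vlm_describe` is a dictionary stub standing in for the fine-tuned VLM, `embed_text` is a deterministic toy encoder, and the "diffusion" step is a simple iterative refinement toward a conditional target rather than a learned denoiser.

```python
import numpy as np

def vlm_describe(frame_id):
    """Stub for the fine-tuned VLM: map a visual observation to a text
    description. (Hypothetical captions; a real system runs a captioner.)"""
    captions = {0: "pour flour into the bowl", 1: "finished pancake on plate"}
    return captions[frame_id]

def embed_text(text, dim=8):
    """Toy deterministic text embedding, seeded by character sum.
    A real system would use the VLM's text encoder."""
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def plan_with_diffusion(start_emb, goal_emb, horizon=3, steps=10):
    """Sketch of conditional denoising: start from noise and iteratively
    refine a length-`horizon` sequence toward an interpolation between the
    start and goal embeddings, which stands in for a learned denoiser
    conditioned on the two text embeddings."""
    dim = start_emb.shape[0]
    rng = np.random.default_rng(0)
    x = rng.standard_normal((horizon, dim))       # noisy action sequence
    alphas = np.linspace(0.0, 1.0, horizon)[:, None]
    target = (1 - alphas) * start_emb + alphas * goal_emb
    for _ in range(steps):
        x = x + 0.5 * (target - x)                # move toward conditional mean
    return x                                      # one latent per planned action

# Caption both observations, embed, then plan in the text-embedding space.
start = embed_text(vlm_describe(0))
goal = embed_text(vlm_describe(1))
plan = plan_with_diffusion(start, goal)
```

In the actual method each refined latent would be decoded to a discrete action label; here the sketch only shows how text embeddings, rather than raw visual features, serve as the conditioning signal for the planner.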