🤖 AI Summary
This work addresses the challenge of long-horizon visual robotic planning, where existing diffusion models—though effective for short-horizon tasks—struggle to scale due to high computational costs, limited training data, and global inconsistency arising from invalid decomposition assumptions in noisy latent spaces. The authors formulate long-horizon planning as inference over a chain-structured factor graph of overlapping video segments. Leveraging a pretrained short-horizon video diffusion model as a local prior, they enforce boundary consistency constraints on Tweedie estimates (i.e., denoised predictions) and coordinate global coherence through synchronous and asynchronous message-passing mechanisms during inference. Notably, the approach requires no additional training or fine-tuning and achieves, for the first time, stable and composable long-horizon visual planning. It significantly outperforms existing baselines on unseen start–goal state pairs and generalizes effectively to out-of-distribution long-horizon tasks.
📝 Abstract
Diffusion models excel at short-horizon robot planning, yet scaling them to long-horizon tasks remains challenging due to computational constraints and limited training data. Existing compositional approaches stitch together short segments by separately denoising each component and averaging overlapping regions. However, this strategy is unstable: the factorization assumption breaks down in noisy data space, leading to inconsistent global plans. We propose that the key to stable compositional generation lies in enforcing boundary agreement on the estimated clean data (Tweedie estimates) rather than on noisy intermediate states. Our method formulates long-horizon planning as inference over a chain-structured factor graph of overlapping video chunks, where pretrained short-horizon video diffusion models provide local priors. At inference time, we enforce boundary agreement through a novel combination of synchronous and asynchronous message passing that operates on Tweedie estimates, producing globally consistent guidance without requiring additional training. Our training-free framework demonstrates significant improvements over existing baselines, effectively generalizing to start-goal combinations unseen in the original training data. Project website: https://comp-visual-planning.github.io/
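To make the core mechanism concrete, here is a minimal toy sketch of one synchronous boundary-agreement step, using 1D NumPy arrays as stand-ins for video-chunk latents. The function names, the scalar `alpha_bar` schedule value, and the simple overlap-averaging update are illustrative assumptions for this sketch, not the paper's exact algorithm; the real method applies such updates to Tweedie estimates of video chunks inside a diffusion sampling loop, combined with asynchronous passes.

```python
import numpy as np

def tweedie_estimate(x_t, eps_pred, alpha_bar):
    """Tweedie (posterior-mean) estimate of clean data x0 from a noisy
    sample x_t and the model's noise prediction eps_pred, under the
    standard DDPM parameterization:
        x0_hat = (x_t - sqrt(1 - alpha_bar) * eps_pred) / sqrt(alpha_bar)
    """
    return (x_t - np.sqrt(1.0 - alpha_bar) * eps_pred) / np.sqrt(alpha_bar)

def sync_boundary_agreement(x0_hats, overlap):
    """One synchronous message-passing sweep over a chain of chunks:
    each pair of adjacent Tweedie estimates averages its shared
    boundary region, so neighboring chunks agree on the overlap.
    (Toy rule: assumes chunk length > 2 * overlap, so the head and
    tail regions of a chunk do not intersect.)"""
    out = [x.copy() for x in x0_hats]
    for i in range(len(out) - 1):
        shared = 0.5 * (out[i][-overlap:] + out[i + 1][:overlap])
        out[i][-overlap:] = shared      # tail of left chunk
        out[i + 1][:overlap] = shared   # head of right chunk
    return out

if __name__ == "__main__":
    # Two overlapping "chunks" that disagree on their shared frame.
    chunks = [np.array([0.0, 0.0, 0.0, 2.0]),
              np.array([0.0, 0.0, 0.0, 0.0])]
    agreed = sync_boundary_agreement(chunks, overlap=1)
    print(agreed[0][-1], agreed[1][0])  # both boundaries now match
```

In a full sampler, a sweep like this would run at every denoising step: each chunk's short-horizon model predicts noise, the Tweedie estimates are reconciled at the boundaries, and the reconciled estimates guide the next noisy update, so no retraining of the short-horizon prior is needed.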