🤖 AI Summary
This work addresses the high computational cost and slow inference of video generation policies in robotic tasks by proposing a training-free inference paradigm for diffusion models. The method introduces, for the first time, a draft-and-target sampling mechanism into video generation, integrating self-play denoising, token chunking, and a progressive acceptance strategy so that a single model both generates global trajectories and verifies fine-grained details. This parallelized approach substantially reduces redundant computation. Experiments on three robotic task benchmarks show up to a 2.1× speedup with near-identical task success rates, markedly improving inference efficiency without compromising performance.
📝 Abstract
Video generation models have been used as robot policies to predict the future states of executing a task, conditioned on a task description and an observation. Previous works overlook their high computational cost and long inference time. To address this challenge, we propose Draft-and-Target Sampling, a novel diffusion inference paradigm for video generation policies that is training-free and improves inference efficiency. We introduce a self-play denoising approach that runs two complementary denoising trajectories in a single model: draft sampling takes large steps to quickly generate a global trajectory, while target sampling takes small steps to verify it. To further speed up generation, we introduce token chunking and a progressive acceptance strategy to reduce redundant computation. Experiments on three benchmarks show that our method achieves up to a 2.1× speedup and improves the efficiency of current state-of-the-art methods with minimal compromise to the success rate. Our code is available.
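The draft/verify loop described above can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration only: the paper's actual denoiser, schedules, tolerance, and acceptance rule are not specified in this summary, so a simple Euler-style update stands in for one reverse-diffusion step, and a per-step error threshold stands in for the progressive acceptance strategy.

```python
import numpy as np

def denoise_step(x, dt):
    # Toy stand-in for one reverse-diffusion update of the video model:
    # an Euler step of dx/dt = -x (decays "noise" toward zero).
    return x - dt * x

def draft_and_target(x0, total_t=1.0, draft_steps=4, target_substeps=4, tol=0.05):
    """Hedged sketch of draft-and-target sampling with one shared model:
    draft sampling takes large steps; target sampling re-runs each draft
    step with smaller sub-steps and accepts the draft when they agree."""
    x = np.asarray(x0, dtype=float)
    dt = total_t / draft_steps
    accepted = 0
    for _ in range(draft_steps):
        draft = denoise_step(x, dt)                 # one large (cheap) draft step
        fine = x
        for _ in range(target_substeps):            # small verification steps
            fine = denoise_step(fine, dt / target_substeps)
        if np.abs(draft - fine).max() <= tol:       # acceptance test (assumed rule)
            x, accepted = draft, accepted + 1       # keep the cheap draft result
        else:
            x = fine                                # fall back to the target result
    return x, accepted
```

In this toy setting a large Euler step stays close to the fine trajectory, so early drafts are accepted and the loop skips most of the small-step work; the real method additionally parallelizes verification via token chunking, which this scalar sketch does not model.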