🤖 AI Summary
To address error accumulation in autoregressive long-horizon video generation for robotic manipulation, this paper proposes a non-autoregressive framework. First, high-level tasks are decomposed into atomic subtasks, and semantically consistent keyframes are generated for them. A diffusion model then interpolates between these keyframes to synthesize long video sequences. A semantics-preserving attention mechanism ensures cross-frame semantic coherence, and a lightweight video-to-joint-state policy regression model enables end-to-end controllable execution. Evaluated on two benchmarks, the method significantly improves video fidelity, task consistency, and executability, achieving state-of-the-art performance in both video generation quality and policy transfer.
📝 Abstract
We address the problem of generating long-horizon videos for robotic manipulation tasks. Text-to-video diffusion models have made significant progress in photorealism, language understanding, and motion generation, but struggle with long-horizon robotic tasks. Recent works use video diffusion models to produce high-quality simulation data and predictive rollouts for robot planning. However, these works predict short sequences of the robot achieving a single task and employ an autoregressive paradigm to extend to the long horizon, leading to error accumulation both in the generated video and in the execution. To overcome these limitations, we propose a novel pipeline that bypasses the need for autoregressive generation. We achieve this through a threefold contribution: 1) we first decompose the high-level goal into smaller atomic tasks and generate keyframes aligned with these instructions; a second diffusion model then interpolates between each pair of consecutive keyframes, yielding the full long-horizon video. 2) We propose a semantics-preserving attention module to maintain consistency across the keyframes. 3) We design a lightweight policy model to regress the robot joint states from the generated videos. Our approach achieves state-of-the-art results on two benchmarks in video quality and consistency, while outperforming previous policy models on long-horizon tasks.
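The keyframe-then-interpolate pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the two diffusion models are replaced by simple NumPy stubs (`generate_keyframe`, `interpolate` are hypothetical names), but the control flow shows why the approach is non-autoregressive — each interpolated segment depends only on its two bounding keyframes, so errors cannot compound across segments.

```python
# Illustrative sketch of the keyframe-then-interpolate pipeline.
# All model calls are NumPy stand-ins; the real system uses
# text-conditioned diffusion models for both stages.
import numpy as np

H, W, C = 64, 64, 3  # frame resolution (illustrative)

def generate_keyframe(subtask: str, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for the keyframe diffusion model: one frame per atomic subtask."""
    return rng.random((H, W, C))

def interpolate(a: np.ndarray, b: np.ndarray, n: int) -> list:
    """Stand-in for the interpolation diffusion model: n intermediate frames."""
    return [(1 - t) * a + t * b for t in np.linspace(0, 1, n + 2)[1:-1]]

def generate_long_video(subtasks: list, frames_between: int = 4) -> list:
    rng = np.random.default_rng(0)
    # Stage 1: one keyframe per atomic subtask (decomposed from the high-level goal).
    keyframes = [generate_keyframe(s, rng) for s in subtasks]
    # Stage 2: fill in each segment independently -- non-autoregressive,
    # since no segment conditions on previously generated frames.
    video = [keyframes[0]]
    for a, b in zip(keyframes, keyframes[1:]):
        video.extend(interpolate(a, b, frames_between))
        video.append(b)
    return video

video = generate_long_video(["reach mug", "grasp mug", "lift mug"])
print(len(video))  # 3 keyframes + 2 segments x 4 interpolated frames = 11
```

In the actual method, the interpolation model would additionally be conditioned on the subtask instructions, and the semantics-preserving attention module would tie the keyframe generations together for cross-frame coherence.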