🤖 AI Summary
Current video world models face two key challenges in long-horizon embodied manipulation video generation: diffusion-based approaches suffer from temporal inconsistency and visual drift, while autoregressive models compromise pixel-level fidelity. To address this, we propose LongScape, a hybrid architecture that employs diffusion modeling within chunks and autoregression across chunks. LongScape introduces two core innovations: (1) an action-aware, variable-length semantic chunking mechanism that aligns each video chunk with a complete manipulation primitive; and (2) a context-aware Mixture-of-Experts (MoE) framework that dynamically activates specialized experts per chunk. This design jointly ensures temporal coherence and high-fidelity visual reconstruction. Experiments on multi-minute robotic manipulation rollouts demonstrate that LongScape significantly outperforms state-of-the-art methods, achieving consistent improvements in visual quality, action-logical consistency, and temporal stability.
📄 Abstract
Video-based world models hold significant potential for generating high-quality embodied manipulation data. However, current video generation methods struggle to achieve stable long-horizon generation: classical diffusion-based approaches often suffer from temporal inconsistency and visual drift over multiple rollouts, while autoregressive methods tend to compromise on visual detail. To solve this, we introduce LongScape, a hybrid framework that adaptively combines intra-chunk diffusion denoising with inter-chunk autoregressive causal generation. Our core innovation is an action-guided, variable-length chunking mechanism that partitions video based on the semantic context of robotic actions. This ensures each chunk represents a complete, coherent action, enabling the model to flexibly generate diverse dynamics. We further introduce a Context-aware Mixture-of-Experts (CMoE) framework that adaptively activates specialized experts for each chunk during generation, guaranteeing high visual quality and seamless chunk transitions. Extensive experimental results demonstrate that our method achieves stable and consistent long-horizon generation over extended rollouts. Our code is available at: https://github.com/tsinghua-fib-lab/Longscape.
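The generation loop described above, intra-chunk diffusion denoising combined with inter-chunk autoregressive conditioning and per-chunk expert routing, can be sketched in simplified form. This is a minimal illustration with hypothetical names and shapes (`chunk_by_action`, `route_expert`, `rollout`), not the authors' implementation: a learned boundary predictor, diffusion sampler, and MoE gate are stood in for by simple NumPy operations.

```python
import numpy as np

def chunk_by_action(actions, boundary_thresh=0.5):
    """Action-guided variable-length chunking (illustrative stand-in for a
    learned boundary detector): split wherever consecutive actions differ
    sharply, so each chunk covers one coherent manipulation primitive."""
    bounds = [0]
    for t in range(1, len(actions)):
        if np.linalg.norm(actions[t] - actions[t - 1]) > boundary_thresh:
            bounds.append(t)
    bounds.append(len(actions))
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def route_expert(context, experts):
    """Context-aware routing: pick the expert whose key best matches the
    mean context feature (a stand-in for a learned MoE gate)."""
    scores = [float(context.mean(axis=0) @ key) for key, _ in experts]
    return experts[int(np.argmax(scores))][1]

def rollout(actions, frame_dim, experts, denoise_steps=4, rng=None):
    """Hybrid generation: diffusion-style refinement within each chunk,
    autoregressive conditioning across chunks."""
    rng = rng or np.random.default_rng(0)
    frames = []
    context = np.zeros((1, frame_dim))  # empty history before the first chunk
    for start, end in chunk_by_action(actions):
        expert = route_expert(context, experts)
        # Intra-chunk "diffusion": start from noise, iteratively refine
        # conditioned on the autoregressive context.
        chunk = rng.normal(size=(end - start, frame_dim))
        for _ in range(denoise_steps):
            chunk = 0.5 * chunk + 0.5 * expert(chunk, context)
        frames.append(chunk)
        # Inter-chunk autoregression: the generated chunk becomes context.
        context = chunk
    return np.concatenate(frames, axis=0)
```

The key design point the sketch reflects is that chunk boundaries are semantic (driven by the action sequence) rather than a fixed frame count, so each autoregressive step hands a complete primitive to the next one.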