🤖 AI Summary
To address error accumulation and spatiotemporal feature misalignment in long-horizon autonomous driving video generation for world modeling, this paper proposes STAGE, a streaming autoregressive generative framework. Built upon diffusion models, STAGE enables infinite-length, temporally coherent, and high-fidelity video synthesis. Its core innovations are: (1) Hierarchical Temporal Feature Transfer (HTFT), a mechanism that explicitly decouples spatiotemporal modeling from denoising; and (2) a three-stage decoupled training strategy that stabilizes cross-frame feature propagation. Evaluated on nuScenes, STAGE significantly outperforms prior methods and, for the first time, continuously generates 600-frame high-quality driving videos. This establishes a scalable, principled generative paradigm for long-horizon world models in autonomous driving.
📝 Abstract
The generation of temporally consistent, high-fidelity driving videos over extended horizons presents a fundamental challenge in autonomous driving world modeling. Existing approaches often suffer from error accumulation and feature misalignment due to inadequate decoupling of spatio-temporal dynamics and limited cross-frame feature propagation mechanisms. To address these limitations, we present STAGE (Streaming Temporal Attention Generative Engine), a novel auto-regressive framework that pioneers hierarchical feature coordination and multi-phase optimization for sustained video synthesis. To achieve high-quality long-horizon driving video generation, we introduce Hierarchical Temporal Feature Transfer (HTFT) and a novel multi-stage training strategy. HTFT enhances temporal consistency between video frames throughout the generation process by modeling the temporal and denoising processes separately and transferring denoising features across frames. The multi-stage training strategy divides training into three stages that decouple the model and simulate the auto-regressive inference process, thereby accelerating convergence and reducing error accumulation. Experiments on the nuScenes dataset show that STAGE significantly surpasses existing methods on the long-horizon driving video generation task. In addition, we explore STAGE's ability to generate unlimited-length driving videos: it produces 600 frames of high-quality driving video on nuScenes, far exceeding the maximum length achievable by existing methods.
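The core HTFT idea, caching each denoising step's features from the previous frame and injecting them at the same noise level when generating the next frame, can be sketched in a toy form. This is an illustrative sketch only: the real STAGE model uses a learned diffusion denoiser, while `denoise_step`, `temporal_blend`, and the shapes below are hypothetical stand-ins not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
FRAME_SHAPE = (8, 8)   # tiny stand-in for a latent frame's features
NUM_STEPS = 4          # denoising steps per frame

def denoise_step(x, t):
    """Hypothetical denoiser update: shrink the noise a bit each step."""
    return x * (1.0 - 1.0 / (NUM_STEPS - t + 1))

def temporal_blend(x, prev_feat, alpha=0.5):
    """Transfer features from the previous frame at the same noise level."""
    return alpha * x + (1.0 - alpha) * prev_feat

def generate(num_frames):
    frames, prev_features = [], None
    for _ in range(num_frames):
        x = rng.standard_normal(FRAME_SHAPE)  # each frame starts from noise
        features = []                         # cache per-step denoising features
        for t in range(NUM_STEPS):
            if prev_features is not None:
                # HTFT-style transfer: condition step t on the previous
                # frame's step-t features, keeping temporal modeling
                # separate from the denoising update itself
                x = temporal_blend(x, prev_features[t])
            x = denoise_step(x, t)
            features.append(x.copy())
        prev_features = features              # stream features to the next frame
        frames.append(x)
    return frames

video = generate(num_frames=6)
print(len(video), video[0].shape)
```

Because each frame is conditioned only on the cached features of its immediate predecessor, the loop can in principle run indefinitely, which mirrors the streaming, unlimited-length generation the abstract describes.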