VideoAR: Autoregressive Video Generation via Next-Frame & Scale Prediction

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
Existing video generation methods suffer from significant limitations in computational efficiency and long-term temporal consistency. This work proposes VideoAR, the first large-scale visual autoregressive framework for video generation, which decouples spatial and temporal dependencies by integrating multi-scale next-frame prediction with autoregressive modeling. Key innovations include a 3D multi-scale tokenizer, multi-scale temporal RoPE positional encoding, a cross-frame error correction mechanism, and a stochastic frame masking strategy, complemented by a three-stage progressive pretraining scheme to enhance spatiotemporal modeling. Experiments demonstrate that VideoAR reduces the Fréchet Video Distance (FVD) on UCF-101 from 99.5 to 88.6, decreases inference steps by over an order of magnitude, and achieves a VBench score of 81.74, matching the performance of diffusion models an order of magnitude larger in scale.
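The stochastic frame masking strategy mentioned above can be illustrated with a minimal sketch. The function below is a plausible reading of the idea, not the paper's actual implementation; the name `random_frame_mask`, the mask probability, and the keep-first-frame rule are all illustrative assumptions.

```python
import random

def random_frame_mask(num_frames, mask_prob=0.15, keep_first=True, seed=None):
    """Pick frames to mask out during training (illustrative sketch of a
    stochastic frame-masking strategy; mask_prob and keep_first are
    assumptions, not values from the paper).

    Keeping the first frame visible gives generation a clean spatial
    anchor; randomly hiding later frames forces the model to rely on
    longer-range temporal context rather than only the immediately
    preceding frame, which can reduce error propagation at inference.
    """
    rng = random.Random(seed)
    mask = [False] * num_frames
    for t in range(num_frames):
        if keep_first and t == 0:
            continue  # never mask the anchor frame
        if rng.random() < mask_prob:
            mask[t] = True
    return mask
```

During training, positions where the mask is `True` would have their tokens replaced (e.g., by a learned placeholder) before the model predicts the next frame.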

๐Ÿ“ Abstract
Recent advances in video generation have been dominated by diffusion and flow-matching models, which produce high-quality results but remain computationally intensive and difficult to scale. In this work, we introduce VideoAR, the first large-scale Visual Autoregressive (VAR) framework for video generation that combines multi-scale next-frame prediction with autoregressive modeling. VideoAR disentangles spatial and temporal dependencies by integrating intra-frame VAR modeling with causal next-frame prediction, supported by a 3D multi-scale tokenizer that efficiently encodes spatio-temporal dynamics. To improve long-term consistency, we propose Multi-scale Temporal RoPE, Cross-Frame Error Correction, and Random Frame Mask, which collectively mitigate error propagation and stabilize temporal coherence. Our multi-stage pretraining pipeline progressively aligns spatial and temporal learning across increasing resolutions and durations. Empirically, VideoAR achieves new state-of-the-art results among autoregressive models, improving FVD on UCF-101 from 99.5 to 88.6 while reducing inference steps by over 10x, and reaching a VBench score of 81.74, competitive with diffusion-based models an order of magnitude larger. These results demonstrate that VideoAR narrows the performance gap between autoregressive and diffusion paradigms, offering a scalable, efficient, and temporally consistent foundation for future video generation research.
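The decoding structure the abstract describes, causal across frames and coarse-to-fine within each frame, can be sketched as follows. This is inferred from the abstract only: `predict_scale` stands in for the transformer and is a hypothetical callable, the scale schedule is made up, and residuals are combined by naive nearest-neighbor upsampling rather than whatever the 3D multi-scale tokenizer actually does.

```python
import numpy as np

def generate_video(predict_scale, num_frames, scales=(1, 2, 4, 8)):
    """Illustrative VAR-style decoding loop (a sketch under assumptions,
    not the paper's implementation).

    Frames are generated causally (next-frame prediction). Within each
    frame, residual token maps are predicted coarse-to-fine (next-scale
    prediction), so one forward pass per scale replaces the hundreds of
    denoising steps a diffusion sampler would need.
    """
    res = scales[-1]  # full-resolution grid size
    video = []
    for t in range(num_frames):
        frame = np.zeros((res, res))  # start each frame from an empty canvas
        for s in scales:
            # predict the s x s residual map, conditioned on all previous
            # frames and on the coarser scales of the current frame
            residual = predict_scale(video, frame, s)
            # upsample the coarse residual to full resolution and accumulate
            up = np.kron(residual, np.ones((res // s, res // s)))
            frame = frame + up
        video.append(frame)
    return np.stack(video)
```

With a dummy predictor that always returns a map of ones, `generate_video(lambda v, f, s: np.ones((s, s)), 3)` yields a `(3, 8, 8)` array whose entries are all 4.0 (one contribution per scale), confirming the accumulate-across-scales structure.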
Problem

Research questions and friction points this paper is trying to address.

video generation
autoregressive modeling
temporal consistency
computational efficiency
error propagation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Autoregressive
Multi-scale Tokenizer
Temporal Coherence
Next-Frame Prediction
Efficient Video Generation