🤖 AI Summary
This work addresses the inefficiency and limited visual fidelity of autoregressive video generation models by proposing NOVA, the first end-to-end autoregressive video generation framework (0.6B parameters) to eliminate vector quantization. Methodologically, NOVA jointly models inter-frame temporal causal prediction and intra-frame spatial set prediction: it employs GPT-style unidirectional temporal attention to enforce strict causality across frames, and bidirectional intra-frame attention to improve spatial coherence. Crucially, it abandons conventional VQ-based tokenization, enabling autoregressive modeling directly over continuous, non-quantized token representations. Experiments demonstrate that NOVA surpasses existing autoregressive methods in data efficiency, inference speed, motion smoothness, and visual quality. Moreover, its text-to-image generation performance matches leading diffusion models at a significantly lower training cost. NOVA also supports long-video synthesis and unifies diverse zero-shot tasks within a single architecture.
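The hybrid attention pattern described above (causal across frames, bidirectional within a frame) can be pictured as a block-structured attention mask. Below is a minimal illustrative sketch in NumPy; the function name, shapes, and token layout are assumptions for illustration, not NOVA's actual implementation.

```python
import numpy as np

def nova_style_attention_mask(num_frames: int, tokens_per_frame: int) -> np.ndarray:
    """Build a boolean attention mask (True = may attend) combining
    unidirectional (causal) attention across frames with bidirectional
    attention among tokens of the same frame.
    Illustrative sketch only, not the paper's code."""
    # frame index of each token position, e.g. [0, 0, 1, 1, 2, 2]
    frame_id = np.repeat(np.arange(num_frames), tokens_per_frame)
    # token i may attend to token j iff j's frame is not later than i's frame:
    # earlier frames -> causal; same frame -> fully bidirectional
    return frame_id[None, :] <= frame_id[:, None]

# 3 frames of 2 tokens each -> a 6x6 mask
mask = nova_style_attention_mask(num_frames=3, tokens_per_frame=2)
```

Within each diagonal block the mask is all True (bidirectional spatial modeling), while above the block diagonal it is all False (no token sees a future frame), preserving the GPT-style causal property the summary refers to.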
📝 Abstract
This paper presents a novel approach that enables autoregressive video generation with high efficiency. We propose to reformulate the video generation problem as non-quantized autoregressive modeling of temporal frame-by-frame prediction and spatial set-by-set prediction. Unlike raster-scan prediction in prior autoregressive models or joint distribution modeling of fixed-length tokens in diffusion models, our approach maintains the causal property of GPT-style models for flexible in-context capabilities, while leveraging bidirectional modeling within individual frames for efficiency. With the proposed approach, we train a novel video autoregressive model without vector quantization, termed NOVA. Our results demonstrate that NOVA surpasses prior autoregressive video models in data efficiency, inference speed, visual fidelity, and video fluency, even with a much smaller model capacity, i.e., 0.6B parameters. NOVA also outperforms state-of-the-art image diffusion models in text-to-image generation tasks, with a significantly lower training cost. Additionally, NOVA generalizes well across extended video durations and enables diverse zero-shot applications in one unified model. Code and models are publicly available at https://github.com/baaivision/NOVA.