Autoregressive Video Generation without Vector Quantization

📅 2024-12-18
🏛️ arXiv.org
📈 Citations: 3
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inefficiency and limited visual fidelity of autoregressive video generation by proposing NOVA, the first end-to-end autoregressive video generation framework (0.6B parameters) to eliminate vector quantization. Methodologically, NOVA jointly models inter-frame temporal causal prediction and intra-frame spatial set prediction: GPT-style unidirectional attention enforces strict causality across frames, while bidirectional attention within each frame strengthens spatial coherence. Crucially, it abandons conventional VQ-based tokenization and models continuous (non-quantized) visual representations directly. Experiments show that NOVA surpasses existing autoregressive methods in data efficiency, inference speed, motion smoothness, and visual quality. Its text-to-image generation performance matches leading diffusion models at a significantly lower training cost. NOVA also supports long-video synthesis and unifies zero-shot multi-task capabilities within a single architecture.

πŸ“ Abstract
This paper presents a novel approach that enables autoregressive video generation with high efficiency. We propose to reformulate the video generation problem as a non-quantized autoregressive modeling of temporal frame-by-frame prediction and spatial set-by-set prediction. Unlike raster-scan prediction in prior autoregressive models or joint distribution modeling of fixed-length tokens in diffusion models, our approach maintains the causal property of GPT-style models for flexible in-context capabilities, while leveraging bidirectional modeling within individual frames for efficiency. With the proposed approach, we train a novel video autoregressive model without vector quantization, termed NOVA. Our results demonstrate that NOVA surpasses prior autoregressive video models in data efficiency, inference speed, visual fidelity, and video fluency, even with a much smaller model capacity, i.e., 0.6B parameters. NOVA also outperforms state-of-the-art image diffusion models in text-to-image generation tasks, with a significantly lower training cost. Additionally, NOVA generalizes well across extended video durations and enables diverse zero-shot applications in one unified model. Code and models are publicly available at https://github.com/baaivision/NOVA.
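The attention scheme described above (causal across frames, bidirectional within a frame) can be sketched as a single block-causal attention mask. This is an illustrative reconstruction, not code from the NOVA repository; the function and variable names are hypothetical.

```python
import numpy as np

def block_causal_mask(num_frames: int, tokens_per_frame: int) -> np.ndarray:
    """Boolean attention mask (True = attention allowed).

    A query token in frame t attends bidirectionally to every token in
    frame t and to all tokens in earlier frames, but never to tokens in
    later frames — causal in time, bidirectional in space.
    """
    # Frame index of each token in the flattened (time * space) sequence.
    frame_ids = np.repeat(np.arange(num_frames), tokens_per_frame)
    # Allowed iff the key's frame is not later than the query's frame.
    return frame_ids[None, :] <= frame_ids[:, None]

mask = block_causal_mask(num_frames=3, tokens_per_frame=2)
# Tokens 0-1 belong to frame 0: they see each other (bidirectional)
# but not tokens 2-5, which belong to the future frames 1 and 2.
```

In a full model this boolean mask would be passed to the attention layer (e.g. as an additive mask of 0 / -inf) so that temporal generation stays strictly causal while each frame's token set is predicted with full spatial context.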
Problem

Research questions and friction points this paper is trying to address.

How can autoregressive video generation be made efficient without vector quantization?
How can the data efficiency, inference speed, and visual fidelity of autoregressive video models be improved?
Can an autoregressive model match state-of-the-art diffusion models in text-to-image generation at a lower training cost?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-quantized autoregressive modeling for video generation
Bidirectional modeling within individual frames for efficiency
Unified model for diverse zero-shot applications