🤖 AI Summary
Existing autoregressive video generation models are constrained by conventional patchwise tokenizers, which capture only local patch-level information, tie the number of discrete tokens to the patch grid, and optimize purely for reconstruction rather than for downstream generation. This paper introduces LARP, a novel video tokenizer that addresses these issues through three key innovations: (i) a set of learned holistic queries that gather global, video-level semantic context instead of encoding patches independently; (ii) a lightweight autoregressive Transformer integrated into the tokenizer's training pipeline as a prior model, so the discrete latent space is jointly optimized for reconstruction fidelity and autoregressive generation; and (iii) a sequential ordering of the discrete tokens that this prior induces during training, yielding smoother and more accurate AR decoding at inference time. Evaluated on class-conditional generation using UCF101, LARP achieves state-of-the-art Fréchet Video Distance (FVD). This work establishes high-fidelity, task-adaptive video discretization as a building block for unified multimodal foundation models spanning video understanding and generation.
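To make the holistic-query idea concrete, below is a minimal PyTorch sketch, not the authors' code: the module name, dimensions, and the straight-through quantizer are illustrative assumptions. A fixed set of learned queries cross-attends to patch features, and the query outputs, rather than the patches themselves, are vector-quantized, so the token budget is decoupled from the patch grid.

```python
import torch
import torch.nn as nn


class HolisticQueryTokenizer(nn.Module):
    """Illustrative holistic-query tokenizer: learned queries attend over
    patch features, and the query outputs are vector-quantized."""

    def __init__(self, num_queries=256, dim=512, codebook_size=1024, num_heads=8):
        super().__init__()
        # Learnable holistic queries; their count sets the token budget,
        # independent of the video's patch grid.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, patch_feats):
        # patch_feats: (B, N_patches, dim) features from a patch/frame encoder.
        B = patch_feats.shape[0]
        q = self.queries.unsqueeze(0).expand(B, -1, -1)
        holistic, _ = self.cross_attn(q, patch_feats, patch_feats)  # (B, Q, dim)
        # Nearest-codebook-entry quantization with a straight-through estimator.
        codes = self.codebook.weight.unsqueeze(0).expand(B, -1, -1)
        dists = torch.cdist(holistic, codes)                        # (B, Q, K)
        ids = dists.argmin(dim=-1)                                   # (B, Q) discrete ids
        quantized = self.codebook(ids)
        quantized = holistic + (quantized - holistic).detach()      # straight-through
        return quantized, ids


# Example: 1024 patch features per clip are compressed into 256 holistic tokens.
feats = torch.randn(2, 1024, 512)
tokens, ids = HolisticQueryTokenizer()(feats)
print(tokens.shape, ids.shape)  # torch.Size([2, 256, 512]) torch.Size([2, 256])
```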
📝 Abstract
We present LARP, a novel video tokenizer designed to overcome limitations in current video tokenization methods for autoregressive (AR) generative models. Unlike traditional patchwise tokenizers that directly encode local visual patches into discrete tokens, LARP introduces a holistic tokenization scheme that gathers information from the visual content using a set of learned holistic queries. This design allows LARP to capture more global and semantic representations, rather than being limited to local patch-level information. Furthermore, it offers flexibility by supporting an arbitrary number of discrete tokens, enabling adaptive and efficient tokenization based on the specific requirements of the task. To align the discrete token space with downstream AR generation tasks, LARP integrates a lightweight AR transformer as a training-time prior model that predicts the next token in its discrete latent space. By incorporating the prior model during training, LARP learns a latent space that is not only optimized for video reconstruction but is also structured in a way that is more conducive to autoregressive generation. Moreover, this process defines a sequential order for the discrete tokens, progressively pushing them toward an optimal configuration during training, ensuring smoother and more accurate AR generation at inference time. Comprehensive experiments demonstrate LARP's strong performance, achieving state-of-the-art FVD on the UCF101 class-conditional video generation benchmark. LARP enhances the compatibility of AR models with videos and opens up the potential to build unified high-fidelity multimodal large language models (MLLMs).
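As a rough illustration of the training-time prior, here is a hedged sketch under assumed hyperparameters and loss weights, not the paper's implementation: a small causal Transformer consumes the straight-through quantized tokens and predicts each discrete index from its predecessors, so adding its next-token loss to the reconstruction objective lets AR-predictability gradients flow back into the tokenizer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightARPrior(nn.Module):
    """Illustrative training-time prior: a small causal Transformer that
    predicts each discrete token from the preceding ones."""

    def __init__(self, codebook_size=1024, dim=512, depth=2, num_heads=8):
        super().__init__()
        self.bos = nn.Parameter(torch.zeros(1, 1, dim))  # learned start token
        layer = nn.TransformerEncoderLayer(dim, num_heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, codebook_size)

    def forward(self, token_embs, ids):
        # token_embs: (B, T, dim) straight-through quantized tokens, so this
        # loss backpropagates into the tokenizer's encoder and queries.
        # ids: (B, T) matching discrete indices, used only as prediction targets.
        B, T, _ = token_embs.shape
        x = torch.cat([self.bos.expand(B, -1, -1), token_embs[:, :-1]], dim=1)
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(token_embs.device)
        h = self.blocks(x, mask=causal)          # position t attends only to tokens < t
        logits = self.head(h)                    # (B, T, codebook_size)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)), ids.reshape(-1))


# Illustrative joint objective: the prior loss is added to the usual
# reconstruction / quantization losses with an assumed weight of 0.1, e.g.
#   total_loss = recon_loss + vq_loss + 0.1 * prior(tokens, ids)
```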