Loong: Generating Minute-level Long Videos with Autoregressive Language Models

📅 2024-10-03
🏛️ arXiv.org
📈 Citations: 19
Influential: 0
🤖 AI Summary
Existing autoregressive large language models (LLMs) for video generation are severely limited in duration, typically producing only a few seconds of video, and thus fail to capture the temporal consistency and fine-grained motion coherence required for minute-long videos. To address this, we propose the first autoregressive long-video generation framework based on a unified text-video token sequence. Our approach comprises three key innovations: (1) constructing a cross-modal unified tokenization space to jointly model text and video; (2) introducing a progressive short-to-long training strategy with dynamic loss re-weighting to alleviate long-range dependency modeling challenges; and (3) incorporating video token re-encoding and optimized sampling (temperature scaling + top-k) to suppress error accumulation during autoregressive inference. Remarkably, trained solely on 10-second video clips, our model generates high-fidelity, semantically aligned, and temporally consistent 60-second videos, significantly outperforming existing baselines in text-video alignment and motion coherence.
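The temperature-plus-top-k sampling named in point (3) is a standard decoding scheme; a minimal sketch is below. The function name `sample_next_token` and the default `top_k=50` are illustrative assumptions, not values from the paper, whose exact hyperparameters are not stated here.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=50, rng=None):
    """Sample one token id using temperature scaling + top-k filtering.

    A generic sketch of the decoding scheme mentioned in the summary;
    Loong's actual temperature and top-k settings may differ.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / temperature
    # Keep only the top_k largest logits; mask the rest to -inf.
    if top_k < logits.size:
        cutoff = np.partition(logits, -top_k)[-top_k]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    # Softmax over the surviving logits (shift by max for stability).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(logits.size, p=probs))
```

Lower temperatures and smaller top-k make decoding more conservative, which is one plausible way such sampling limits the drift that feeds error accumulation in long autoregressive rollouts.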

📝 Abstract
It is desirable but challenging to generate content-rich long videos in the scale of minutes. Autoregressive large language models (LLMs) have achieved great success in generating coherent and long sequences of tokens in the domain of natural language processing, while the exploration of autoregressive LLMs for video generation is limited to generating short videos of several seconds. In this work, we conduct a deep analysis of the challenges that prevent autoregressive LLM-based video generators from generating long videos. Based on the observations and analysis, we propose Loong, a new autoregressive LLM-based video generator that can generate minute-long videos. Specifically, we model the text tokens and video tokens as a unified sequence for autoregressive LLMs and train the model from scratch. We propose progressive short-to-long training with a loss re-weighting scheme to mitigate the loss imbalance problem for long video training. We further investigate inference strategies, including video token re-encoding and sampling strategies, to diminish error accumulation during inference. Our proposed Loong can be trained on 10-second videos and be extended to generate minute-level long videos conditioned on text prompts, as demonstrated by the results. More samples are available at: https://yuqingwang1029.github.io/Loong-video.
Problem

Research questions and friction points this paper is trying to address.

Generating minute-long videos using autoregressive LLMs
Overcoming loss imbalance in long video training
Reducing error accumulation during video inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified sequence modeling of text and video tokens
Progressive short-to-long training with loss re-weighting
Inference strategies to reduce error accumulation
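The loss re-weighting idea above can be illustrated as a weighted mean of per-token losses, with one weight per frame position so that no temporal segment dominates training. This is a hypothetical sketch: the function `reweighted_loss` and the example weights are illustrative, and the weighting schedule Loong actually uses is not reproduced here.

```python
import numpy as np

def reweighted_loss(token_losses, frame_ids, frame_weights):
    """Weighted mean of per-token losses, with one weight per frame index.

    token_losses : per-token loss values (e.g. cross-entropy)
    frame_ids    : index of the frame each token belongs to
    frame_weights: weight assigned to each frame position
    """
    # Broadcast each token's frame weight from its frame index.
    w = np.asarray(frame_weights, dtype=np.float64)[np.asarray(frame_ids)]
    losses = np.asarray(token_losses, dtype=np.float64)
    # Weighted average so re-weighting changes balance, not scale.
    return float((w * losses).sum() / w.sum())
```

Raising the weight of under-served frame positions shifts gradient signal toward them, which is the general mechanism a re-weighting scheme like this relies on.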