🤖 AI Summary
This work addresses the challenge of high-fidelity video generation by proposing Latte—the first latent diffusion Transformer designed specifically for video generation. Methodologically, Latte models the video distribution in latent space, extracting spatio-temporal tokens from input videos and processing them with a stack of Transformer blocks; its design choices include video clip patch embedding, learnable temporal positional encodings, timestep-class information injection, and tailored learning strategies, supporting both standard video generation and text-to-video synthesis. Key contributions include: (i) the integration of a fully Transformer-based architecture into the video diffusion framework; (ii) the introduction of four efficient variants that decompose the spatial and temporal dimensions of input videos; and (iii) the systematic establishment of best practices for video patch embedding and conditional information injection. Experiments demonstrate that Latte achieves state-of-the-art performance on four major benchmarks—FaceForensics, SkyTimelapse, UCF101, and Taichi-HD—and results comparable to recent text-to-video (T2V) models on text-conditioned video synthesis.
📝 Abstract
We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model the video distribution in the latent space. In order to model the substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, covering video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where it achieves results comparable to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
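To make the token-extraction and spatial/temporal-decomposition idea concrete, here is a minimal NumPy sketch. It is an illustrative assumption, not the paper's implementation: the function names (`video_to_tokens`, `to_temporal`), the patch size, and the latent shapes are all hypothetical. It shows how a latent video is split into per-frame spatial patches, and how the resulting token grid can be rearranged so that one block attends over space within a frame while the next attends over time at each spatial location.

```python
import numpy as np

def video_to_tokens(latent, patch=2):
    """Split a latent video of shape (T, H, W, C) into non-overlapping
    spatial patches per frame, returning tokens of shape
    (T, (H/patch)*(W/patch), patch*patch*C)."""
    T, H, W, C = latent.shape
    assert H % patch == 0 and W % patch == 0
    h, w = H // patch, W // patch
    x = latent.reshape(T, h, patch, w, patch, C)
    x = x.transpose(0, 1, 3, 2, 4, 5)   # (T, h, w, patch, patch, C)
    return x.reshape(T, h * w, patch * patch * C)

def to_temporal(tokens):
    """Rearrange frame-major tokens (T, S, D) into (S, T, D), so a
    temporal Transformer block can attend across frames at each
    spatial location."""
    return tokens.transpose(1, 0, 2)

# Example: 16 frames of 32x32 latents with 4 channels.
latent = np.random.randn(16, 32, 32, 4)
tokens = video_to_tokens(latent)        # (16, 256, 16): spatial attention per frame
temporal = to_temporal(tokens)          # (256, 16, 16): temporal attention per patch
print(tokens.shape, temporal.shape)
```

Alternating blocks over these two views is one way to realize the spatial/temporal decomposition the abstract describes, while keeping each attention operation over a short sequence instead of all T*S tokens at once.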