AI Summary
This work addresses the challenge of jointly optimizing generation quality, inference efficiency, and deployment flexibility for high-resolution image and 10-second video synthesis. We propose a unified family of foundation models trained via a multi-stage framework: large-scale pretraining of 6B–19B-parameter architectures, followed by semantic-aware data clustering and filtering, supervised fine-tuning, and reinforcement learning-based post-training. This enables joint modeling and accelerated inference for both modalities. The model family comprises lightweight and high-fidelity variants, supporting diverse tasks including text-to-image and image-to-video generation. Human evaluations demonstrate significant improvements over state-of-the-art methods in both fidelity and generation speed. To foster reproducibility and practical adoption, we fully open-source the codebase and model checkpoints.
Abstract
This report introduces Kandinsky 5.0, a family of state-of-the-art foundation models for high-resolution image and 10-second video synthesis. The framework comprises three core model line-ups: Kandinsky 5.0 Image Lite - a line-up of 6B-parameter image generation models; Kandinsky 5.0 Video Lite - a line-up of fast and lightweight 2B-parameter text-to-video and image-to-video models; and Kandinsky 5.0 Video Pro - 19B-parameter models that achieve superior video generation quality. We provide a comprehensive review of the data curation lifecycle - including collection, processing, filtering, and clustering - for the multi-stage training pipeline, which involves extensive pre-training and incorporates quality-enhancement techniques such as supervised fine-tuning (SFT) and reinforcement learning (RL)-based post-training. We also present novel architectural, training, and inference optimizations that enable Kandinsky 5.0 to achieve high generation speeds and state-of-the-art performance across various tasks, as demonstrated by human evaluation. As a large-scale, publicly available generative framework, Kandinsky 5.0 leverages the full potential of its pre-training and subsequent stages to be adapted for a wide range of generative applications. We hope that this report, together with the release of our open-source code and training checkpoints, will substantially advance the development and accessibility of high-quality generative models for the research community.