🤖 AI Summary
Diffusion Transformers (DiTs) incur prohibitive computational and memory overhead in high-resolution 2K video synthesis, hindering practical deployment.
Method: We propose a compressed latent-space + knowledge distillation + hierarchical two-stage synthesis framework. First, we identify structural similarity in internal DiT representations to enable cross-architecture teacher–student distillation. Second, we design a highly compressed VAE latent space to eliminate redundant encoding/decoding. Third, we introduce a multi-level feature-guided two-stage synthesis architecture that jointly ensures temporal consistency and fine-grained detail fidelity.
Contribution/Results: On 5-second, 24-fps 2K video generation, our method achieves 20× faster inference than state-of-the-art approaches, with substantial reductions in GPU memory consumption and FLOPs. The framework supports end-to-end deployment on commodity hardware, enabling practical high-resolution diffusion-based video synthesis.
📝 Abstract
Demand for 2K video synthesis is rising with increasing consumer expectations for ultra-clear visuals. While diffusion transformers (DiTs) have demonstrated remarkable capabilities in high-quality video generation, scaling them to 2K resolution remains computationally prohibitive due to quadratic growth in memory and processing costs. In this work, we propose Turbo2K, an efficient and practical framework for generating detail-rich 2K videos while significantly improving training and inference efficiency. First, Turbo2K operates in a highly compressed latent space, reducing computational complexity and memory footprint and making high-resolution video synthesis feasible. However, the high compression ratio of the VAE and the limited model size constrain generative quality. To mitigate this, we introduce a knowledge distillation strategy that enables a smaller student model to inherit the generative capacity of a larger, more powerful teacher model. Our analysis reveals that, despite differences in latent spaces and architectures, DiTs exhibit structural similarities in their internal representations, facilitating effective knowledge transfer. Second, we design a hierarchical two-stage synthesis framework that first generates multi-level features at lower resolutions before guiding high-resolution video generation. This approach ensures structural coherence and fine-grained detail refinement while eliminating redundant encoding–decoding overhead, further enhancing computational efficiency. Turbo2K achieves state-of-the-art efficiency, generating 5-second, 24-fps, 2K videos at significantly reduced computational cost. Compared to existing methods, Turbo2K is up to 20× faster at inference, making high-resolution video generation more scalable and practical for real-world applications.
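The cross-architecture distillation described above can be illustrated with a minimal sketch: because the teacher and student DiTs use different latent spaces and hidden dimensions, student features at each level are mapped through a learned projection before being matched against teacher features. Note this is a hedged illustration only, not the authors' implementation; the projection matrices, feature shapes, and per-level loss weights below are all assumptions.

```python
import numpy as np

def feature_distillation_loss(student_feats, teacher_feats, projections, weights):
    """Multi-level feature-matching loss for cross-architecture distillation.

    student_feats / teacher_feats: lists of (tokens, dim) arrays, one per level.
    projections: linear maps from the student's hidden dim to the teacher's
                 (hypothetical; in real training these would be learned jointly).
    weights: per-level scalar loss weights.
    """
    total = 0.0
    for s, t, w_proj, w in zip(student_feats, teacher_feats, projections, weights):
        aligned = s @ w_proj                      # project student into teacher's feature space
        total += w * np.mean((aligned - t) ** 2)  # per-level MSE feature-matching term
    return total

# Toy example: two levels, student dim 8 projected to teacher dim 16, 4 tokens each.
rng = np.random.default_rng(0)
student = [rng.standard_normal((4, 8)) for _ in range(2)]
projs = [rng.standard_normal((8, 16)) for _ in range(2)]
teacher = [s @ p for s, p in zip(student, projs)]  # perfectly aligned case
print(feature_distillation_loss(student, teacher, projs, [1.0, 0.5]))
```

In the aligned toy case the loss is zero; during actual training, minimizing this objective over the student's parameters (and the projections) would push the student's internal representations toward the teacher's, which is what makes the structural similarity across DiTs exploitable.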