FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models for video super-resolution (VSR) suffer from high latency, excessive computational cost, and poor generalization to ultra-high resolutions, hindering real-time deployment. To address these limitations, we propose FlashVSR, the first diffusion-based one-step streaming framework for real-time VSR. The method introduces three key innovations: (1) a train-friendly three-stage knowledge distillation pipeline that enables efficient teacher-to-student transfer for streaming super-resolution; (2) a locality-constrained sparse attention mechanism that drastically reduces memory footprint and computational complexity; and (3) a lightweight conditional decoder that accelerates reconstruction without sacrificing fidelity. Trained on the large-scale VSR-120K dataset (120k videos and 180k images), the framework achieves real-time inference at roughly 17 FPS for 768×1408 video on a single A100 GPU, up to 12× faster than prior one-step diffusion VSR models, while attaining state-of-the-art reconstruction quality. Moreover, it scales reliably to 4K+ resolutions.

📝 Abstract
Diffusion models have recently advanced video restoration, but applying them to real-world video super-resolution (VSR) remains challenging due to high latency, prohibitive computation, and poor generalization to ultra-high resolutions. Our goal in this work is to make diffusion-based VSR practical by achieving efficiency, scalability, and real-time performance. To this end, we propose FlashVSR, the first diffusion-based one-step streaming framework towards real-time VSR. FlashVSR runs at approximately 17 FPS for 768×1408 videos on a single A100 GPU by combining three complementary innovations: (i) a train-friendly three-stage distillation pipeline that enables streaming super-resolution, (ii) locality-constrained sparse attention that cuts redundant computation while bridging the train-test resolution gap, and (iii) a tiny conditional decoder that accelerates reconstruction without sacrificing quality. To support large-scale training, we also construct VSR-120K, a new dataset with 120k videos and 180k images. Extensive experiments show that FlashVSR scales reliably to ultra-high resolutions and achieves state-of-the-art performance with up to 12× speedup over prior one-step diffusion VSR models. We will release the code, pretrained models, and dataset to foster future research in efficient diffusion-based VSR.
Problem

Research questions and friction points this paper is trying to address.

Achieving real-time diffusion-based video super-resolution
Overcoming high latency and computation in VSR
Enhancing generalization to ultra-high resolution videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage distillation pipeline for streaming super-resolution
Locality-constrained sparse attention reduces computation
Tiny conditional decoder accelerates reconstruction without sacrificing quality
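The locality-constrained sparse attention listed above is the core of the computational savings: each query attends only to keys in a local neighborhood, so cost grows with the window size rather than with the full sequence. The paper's exact windowing scheme is not reproduced on this page; the snippet below is a minimal NumPy sketch of generic local-window attention, where the `window` radius and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_attention_mask(n_q, n_k, window):
    # Boolean mask: query i may attend to key j iff |i - j| <= window.
    q = np.arange(n_q)[:, None]
    k = np.arange(n_k)[None, :]
    return np.abs(q - k) <= window

def local_sparse_attention(Q, K, V, window):
    # Q: (n_q, d), K and V: (n_k, d). Scores outside the local window
    # are set to -inf so their softmax weight is exactly zero.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    mask = local_attention_mask(Q.shape[0], K.shape[0], window)
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

A production kernel would skip the masked positions entirely (block-sparse layout) instead of materializing the dense score matrix, which is where the memory and FLOP reductions actually come from; the dense mask here only illustrates the attention pattern.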