🤖 AI Summary
Existing one-step diffusion-based super-resolution methods rely on teacher-model distillation, which caps performance at the teacher's capability. This work proposes D³SR, a teacher-free, end-to-end one-step diffusion framework for real-world image super-resolution (Real-ISR). The approach addresses the teacher-dependency bottleneck via two key innovations: (1) a larger-scale diffusion discriminator that directly distills noisy features in latent space at arbitrary timesteps, without teacher guidance; and (2) an edge-aware DISTS loss (EA-DISTS) designed to enhance high-frequency detail reconstruction. Extensive experiments demonstrate that D³SR matches or surpasses state-of-the-art multi-step diffusion models on standard metrics, including PSNR, SSIM, and LPIPS, while achieving at least 3× faster inference and at least 30% fewer parameters. To the authors' knowledge, this is the first diffusion-based Real-ISR method that eliminates teacher distillation entirely, offering superior efficiency with competitive fidelity.
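The "arbitrary timesteps without teacher guidance" idea can be sketched as a diffusion-GAN-style setup: both the real latent and the one-step generator's latent are perturbed with the standard DDPM forward process at a shared random timestep before the discriminator scores them. This is a minimal illustrative sketch, not the authors' released implementation; the linear beta schedule, shapes, and function names are assumptions.

```python
import numpy as np

def ddpm_forward(z0, t, alpha_bar, rng):
    """Noise a clean latent z0 to timestep t via the DDPM forward process:
    z_t = sqrt(alpha_bar_t) * z0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Assumed linear beta schedule with 1000 steps (standard DDPM choice).
betas = np.linspace(1e-4, 0.02, 1000)
alpha_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
real_latent = rng.standard_normal((4, 16, 16))  # e.g. VAE latent of an HR image
fake_latent = rng.standard_normal((4, 16, 16))  # one-step generator output (placeholder)

# Both inputs are noised at the *same* random timestep, so the discriminator
# can be trained on noisy latents at any t, with no frozen teacher in the loop.
t = int(rng.integers(0, len(betas)))
real_t = ddpm_forward(real_latent, t, alpha_bar, rng)
fake_t = ddpm_forward(fake_latent, t, alpha_bar, rng)
```

In this view the discriminator, not a teacher network, supplies the training signal at every timestep, which is what removes the teacher's capability ceiling.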
📝 Abstract
Diffusion models have demonstrated excellent performance for real-world image super-resolution (Real-ISR), albeit at high computational costs. Most existing methods attempt to derive one-step diffusion models from multi-step counterparts through knowledge distillation (KD) or variational score distillation (VSD). However, these methods are limited by the capabilities of the teacher model, especially if the teacher model itself is not sufficiently strong. To tackle these issues, we propose a new One-Step **D**iffusion model with a larger-scale **D**iffusion **D**iscriminator for SR, called D³SR. Our discriminator is able to distill noisy features from any time step of diffusion models in the latent space. In this way, our diffusion discriminator breaks through the limitations imposed by the presence of a teacher model. Additionally, we improve the perceptual loss with an edge-aware DISTS (EA-DISTS) to enhance the model's ability to generate fine details. Our experiments demonstrate that, compared with previous diffusion-based methods requiring dozens or even hundreds of steps, our D³SR attains comparable or even superior results in both quantitative metrics and qualitative evaluations. Moreover, compared with other methods, D³SR achieves at least 3× faster inference speed and reduces parameters by at least 30%. We will release code and models at https://github.com/JianzeLi-114/D3SR.
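One plausible reading of EA-DISTS is a base DISTS perceptual loss augmented with the same measure computed on edge maps, so high-frequency structure is penalized explicitly. The sketch below is an assumption about that formulation, not the paper's code: `dists_fn` is a stand-in for a real DISTS implementation (which needs a pretrained VGG), and the Sobel operator is one plausible edge extractor.

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a 2-D float array (H, W)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w), dtype=np.float64)
    gy = np.zeros((h, w), dtype=np.float64)
    # Correlate with the two 3x3 kernels via shifted views of the padded image.
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.sqrt(gx ** 2 + gy ** 2)

def ea_dists(x, y, dists_fn, lam=1.0):
    """Hypothetical edge-aware combination: perceptual distance on the images
    plus a weighted perceptual distance on their edge maps."""
    return dists_fn(x, y) + lam * dists_fn(sobel_edges(x), sobel_edges(y))
```

For example, plugging in a toy mean-absolute-error `dists_fn` shows the edge term adds an extra penalty whenever the two images differ in high-frequency content.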