🤖 AI Summary
To address the challenge of deploying GAN-based super-resolution models on resource-constrained devices, this paper proposes a student discriminator-assisted knowledge distillation framework. The method introduces a lightweight student discriminator that collaboratively guides the training of the student generator, mitigating the capacity mismatch between the teacher and student networks. It further incorporates a three-stage progressive training strategy and an enhanced feature map alignment loss to improve the reconstruction quality and convergence stability of compact models. Compatible with mainstream architectures, including GCFSR and Real-ESRGAN, the approach achieves significant gains over existing GAN distillation methods across multiple benchmark datasets, yielding average improvements of 0.32 dB in PSNR and 0.008 in SSIM. It also reduces the model parameter count by up to 67%, striking an effective balance between computational efficiency and perceptual fidelity.
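For context on the reported 0.32 dB gain: PSNR is a logarithmic function of the mean squared reconstruction error, so a fixed dB improvement corresponds to a multiplicative reduction in MSE. A minimal sketch of the standard PSNR formula (this helper is illustrative, not from the paper):

```python
import math

def psnr(mse: float, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for a given mean squared error,
    assuming pixel values in [0, max_val]."""
    if mse == 0:
        return float("inf")
    return 20 * math.log10(max_val) - 10 * math.log10(mse)
```

Because of the log scale, a 0.32 dB gain implies the student's MSE dropped by a factor of about 10**0.032, i.e. roughly 7%, regardless of the absolute error level.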
📝 Abstract
Generative Adversarial Networks (GANs) achieve excellent performance in generative tasks, such as image super-resolution, but their computational requirements make them difficult to deploy on resource-constrained devices. While knowledge distillation is a promising research direction for GAN compression, effectively training a smaller student generator is challenging due to the capacity mismatch between the student generator and the teacher discriminator. In this work, we propose Student Discriminator Assisted Knowledge Distillation (SDAKD), a novel GAN distillation methodology that introduces a student discriminator to mitigate this capacity mismatch. SDAKD follows a three-stage training strategy and integrates an adapted feature map distillation approach in its last two training stages. We evaluated SDAKD on two well-performing super-resolution GANs, GCFSR and Real-ESRGAN. Our experiments demonstrate consistent improvements over the baselines and SOTA GAN knowledge distillation methods. The SDAKD source code will be made openly available upon acceptance of the paper.
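The abstract does not spell out the adapted feature map distillation loss, but a common baseline it builds on is aligning normalized intermediate activations of student and teacher. A minimal NumPy sketch of such a generic feature-alignment term, under the assumption of shape-matched feature maps (in practice a learned projection would bridge differing channel widths):

```python
import numpy as np

def feature_alignment_loss(student_feats: np.ndarray,
                           teacher_feats: np.ndarray) -> float:
    """Generic feature-map distillation term: mean squared error between
    L2-normalized student and teacher activations of shape (C, H, W).
    Illustrative only; the paper's adapted loss may differ."""
    def normalize(f: np.ndarray) -> np.ndarray:
        # Normalize each channel's activation map to unit L2 norm so the
        # loss compares activation patterns rather than raw magnitudes.
        flat = f.reshape(f.shape[0], -1)
        norms = np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8
        return flat / norms
    s, t = normalize(student_feats), normalize(teacher_feats)
    return float(np.mean((s - t) ** 2))
```

The per-channel normalization makes the term invariant to the scale gap between a wide teacher and a narrow student, which is one simple way to soften the capacity mismatch the paper targets.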