SDAKD: Student Discriminator Assisted Knowledge Distillation for Super-Resolution Generative Adversarial Networks

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the deployment challenges of GAN-based super-resolution models on resource-constrained devices, this paper proposes a student discriminator-assisted knowledge distillation framework. The method introduces a lightweight student discriminator to collaboratively guide the training of the student generator, mitigating capability mismatch between teacher and student networks. It further incorporates a three-stage progressive training strategy and an enhanced feature map alignment loss to improve reconstruction quality and convergence stability of compact models. Compatible with mainstream architectures—including GCFSR and Real-ESRGAN—the approach achieves significant gains over existing GAN distillation methods across multiple benchmark datasets, yielding average improvements of 0.32 dB in PSNR and 0.008 in SSIM. Moreover, it reduces model parameter count by up to 67%, striking an effective balance between computational efficiency and perceptual fidelity.

📝 Abstract
Generative Adversarial Networks (GANs) achieve excellent performance in generative tasks, such as image super-resolution, but their computational requirements make their deployment on resource-constrained devices difficult. While knowledge distillation is a promising research direction for GAN compression, effectively training a smaller student generator is challenging due to the capacity mismatch between the student generator and the teacher discriminator. In this work, we propose Student Discriminator Assisted Knowledge Distillation (SDAKD), a novel GAN distillation methodology that introduces a student discriminator to mitigate this capacity mismatch. SDAKD follows a three-stage training strategy and integrates an adapted feature map distillation approach in its last two training stages. We evaluated SDAKD on two well-performing super-resolution GANs, GCFSR and Real-ESRGAN. Our experiments demonstrate consistent improvements over the baselines and SOTA GAN knowledge distillation methods. The SDAKD source code will be made openly available upon acceptance of the paper.
Problem

Research questions and friction points this paper is trying to address.

Compressing GANs for deployment on resource-constrained devices
Addressing capacity mismatch in student-teacher GAN distillation
Improving super-resolution GAN performance via novel distillation method
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces student discriminator to reduce capacity mismatch
Uses three-stage training strategy for GAN distillation
Integrates adapted feature map distillation approach
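The three-stage strategy above can be summarized as a staged loss schedule. The exact composition of each stage is not spelled out in this summary; the sketch below is one plausible reading, consistent with the abstract's statement that feature map distillation is active only in the last two stages (the stage contents, especially when the student-discriminator adversarial term switches on, are assumptions):

```python
def stage_losses(stage):
    """Hypothetical mapping from SDAKD-style training stage to active loss
    terms. Feature distillation enters at stage 2, matching the claim that
    it is used in the last two of three stages; the adversarial term with
    the student discriminator is assumed to be active in the final stage."""
    schedule = {
        1: {"pixel": True, "feature_distill": False, "adversarial": False},
        2: {"pixel": True, "feature_distill": True,  "adversarial": False},
        3: {"pixel": True, "feature_distill": True,  "adversarial": True},
    }
    return schedule[stage]
```

Progressive schedules like this are a common way to stabilize GAN distillation: the student first learns a coarse reconstruction before adversarial feedback is introduced.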