Knowledge Distillation for Image Restoration: Simultaneous Learning from Degraded and Clean Images

📅 2025-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To balance restoration quality against model size when compressing image restoration networks, this paper proposes a Simultaneous Learning Knowledge Distillation (SLKD) framework. It employs a dual-teacher, single-student architecture, where one teacher processes degraded inputs and the other handles clean references, to decouple and jointly optimize Degradation Removal Learning (DRL) and Image Reconstruction Learning (IRL). Crucially, it introduces BRISQUE and PIQE, two no-reference perceptual quality metrics, to guide feature-level alignment without relying on ground-truth labels. Through encoder-decoder co-optimization, SLKD reduces FLOPs and parameters by over 80% across five datasets and three image restoration tasks, with PSNR degradation under 0.3 dB and comparable SSIM. The compressed student closely matches teacher performance while markedly improving deployment efficiency and cross-dataset generalization.
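The dual-teacher structure is easiest to see in code. Below is a minimal PyTorch sketch of one training step under the stated design, where one frozen teacher sees the degraded input and the other sees the clean reference; the model interfaces, the plain MSE feature alignment, and the unweighted loss sum are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of SLKD's dual-teacher, single-student step.
# Model interfaces and the MSE/L1 losses are assumptions for illustration.
import torch
import torch.nn.functional as F

def slkd_step(student, teacher_a, teacher_b, degraded, clean):
    """One training step: teacher_a is frozen and sees the degraded input
    (guides DRL / the student encoder); teacher_b is frozen and sees the
    clean reference (guides IRL / the student decoder). Each model is
    assumed to return (encoder_features, decoder_features, output)."""
    with torch.no_grad():
        enc_a, _, _ = teacher_a(degraded)   # degradation-removal target
        _, dec_b, _ = teacher_b(clean)      # reconstruction target

    enc_s, dec_s, restored = student(degraded)

    loss_drl = F.mse_loss(enc_s, enc_a)     # DRL: align encoder features
    loss_irl = F.mse_loss(dec_s, dec_b)     # IRL: align decoder features
    loss_rec = F.l1_loss(restored, clean)   # supervised restoration loss

    return loss_rec + loss_drl + loss_irl   # weighting coefficients omitted
```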

📝 Abstract
Model compression through knowledge distillation has seen extensive application in classification and segmentation tasks. However, its potential in image-to-image translation, particularly in image restoration, remains underexplored. To address this gap, we propose a Simultaneous Learning Knowledge Distillation (SLKD) framework tailored for model compression in image restoration tasks. SLKD employs a dual-teacher, single-student architecture that simultaneously applies two distinct learning strategies: Degradation Removal Learning (DRL) and Image Reconstruction Learning (IRL). In DRL, the student encoder learns from Teacher A to focus on removing degradation factors, guided by a novel BRISQUE extractor. In IRL, the student decoder learns from Teacher B to reconstruct clean images, with the assistance of a proposed PIQE extractor. These strategies enable the student to learn from degraded and clean images simultaneously, ensuring high-quality compression of image restoration models. Experimental results across five datasets and three tasks demonstrate that SLKD achieves substantial reductions in FLOPs and parameters, exceeding 80%, while maintaining strong image restoration performance.
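As a rough illustration of how a no-reference metric such as BRISQUE could steer feature-level alignment without ground-truth labels, the sketch below weights a distillation loss by the BRISQUE score of the student's current output, using the `piq` package's PyTorch implementation. The weighting scheme is an assumption for illustration; the paper's BRISQUE/PIQE extractors are its own proposed modules, and a PIQE analogue would follow the same pattern.

```python
# Illustrative only: modulate a feature-alignment loss with a no-reference
# quality score. `piq.brisque` is a real function (pip install piq); how
# SLKD's BRISQUE/PIQE extractors actually enter the loss is assumed here.
import torch
import torch.nn.functional as F
import piq

def quality_weighted_alignment(student_feat: torch.Tensor,
                               teacher_feat: torch.Tensor,
                               restored: torch.Tensor) -> torch.Tensor:
    # Lower BRISQUE = better perceptual quality. Map the (roughly 0-100)
    # score to a weight so the student leans harder on the teacher while
    # its output still looks degraded.
    score = piq.brisque(restored.clamp(0.0, 1.0), data_range=1.0)
    weight = (score / 100.0).clamp(0.0, 1.0)
    return weight * F.mse_loss(student_feat, teacher_feat)
```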
Problem

Research questions and friction points this paper is trying to address.

Image Restoration
Model Size Reduction
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simultaneous Learning Knowledge Distillation (SLKD)
Efficient Image Restoration
Dual Teacher Model
Yongheng Zhang
M.S. Student @ CSU | Research Intern @ Tencent
Artificial Intelligence · Large Language Model · World Model
Danfeng Yan
State Key Laboratory of Networking and Switching Technology, BUPT, Beijing, China