Distillation-Supervised Convolutional Low-Rank Adaptation for Efficient Image Super-Resolution

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the trade-off between performance gains and computational overhead in lightweight CNN-based image super-resolution, this paper proposes DSCLoRA, a framework that integrates Low-Rank Adaptation (LoRA) into efficient super-resolution CNNs. The authors design the SConvLB module and pair it with spatial feature affinity-guided knowledge distillation, jointly optimizing the transfer of second-order statistical information and low-rank parameter updates. ConvLoRA layers are also inserted into the pixel-shuffle block and its preceding convolutional layer, the SPAB modules of the lightweight SPAN backbone are replaced with SConvLB. Crucially, DSCLoRA improves reconstruction quality without increasing network depth, parameter count, or inference latency: it ranked first in the Overall Performance Track of the NTIRE 2025 Efficient Super-Resolution Challenge, outperforming the baseline SPAN model in both PSNR and SSIM at identical model size and inference cost.

📝 Abstract
Convolutional neural networks (CNNs) have been widely used in efficient image super-resolution. However, for CNN-based methods, performance gains often require deeper networks and larger feature maps, which increase complexity and inference costs. Inspired by LoRA's success in fine-tuning large language models, we explore its application to lightweight models and propose Distillation-Supervised Convolutional Low-Rank Adaptation (DSCLoRA), which improves model performance without increasing architectural complexity or inference costs. Specifically, we integrate ConvLoRA into the efficient SR network SPAN by replacing the SPAB module with the proposed SConvLB module and incorporating ConvLoRA layers into both the pixel shuffle block and its preceding convolutional layer. DSCLoRA leverages low-rank decomposition for parameter updates and employs a spatial feature affinity-based knowledge distillation strategy to transfer second-order statistical information from teacher models (pre-trained SPAN) to student models (ours). This method preserves the core knowledge of lightweight models and facilitates optimal solution discovery under certain conditions. Experiments on benchmark datasets show that DSCLoRA improves PSNR and SSIM over SPAN while maintaining its efficiency and competitive image quality. Notably, DSCLoRA ranked first in the Overall Performance Track of the NTIRE 2025 Efficient Super-Resolution Challenge. Our code and models are made publicly available at https://github.com/Yaozzz666/DSCF-SR.
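The abstract's core mechanism is a frozen convolution augmented with a trainable low-rank update that can be folded back into the base kernel, so inference cost matches the original model. The sketch below illustrates this idea in PyTorch; the class name, rank, and scaling follow common LoRA conventions and are assumptions, not the paper's exact `ConvLoRA` implementation.

```python
import torch
import torch.nn as nn

class ConvLoRA(nn.Module):
    """Frozen conv plus a trainable low-rank branch (illustrative sketch).

    The update is factorised as a rank-r spatial down-projection followed
    by a 1x1 up-projection; after training, merge() folds it into the
    base kernel so deployed depth/params/latency are unchanged.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, rank=4, alpha=1.0):
        super().__init__()
        pad = kernel_size // 2
        self.base = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
        self.base.weight.requires_grad_(False)   # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        # Low-rank factors: (rank, in, k, k) then (out, rank, 1, 1).
        self.lora_down = nn.Conv2d(in_ch, rank, kernel_size, padding=pad, bias=False)
        self.lora_up = nn.Conv2d(rank, out_ch, 1, bias=False)
        nn.init.zeros_(self.lora_up.weight)      # update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_up(self.lora_down(x))

    def merge(self):
        """Fold the low-rank update into the base conv weight in place."""
        with torch.no_grad():
            # Compose 1x1 up-projection with k x k down-projection:
            # delta[o,i,k,h] = sum_r up[o,r] * down[r,i,k,h]
            delta = torch.einsum('orxy,rikh->oikh',
                                 self.lora_up.weight, self.lora_down.weight)
            self.base.weight += self.scale * delta
```

Because the 1x1 up-projection composes linearly with the k x k down-projection, the merged kernel reproduces the two-branch forward pass exactly, which is what lets the adapted model keep the baseline's inference footprint.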
Problem

Research questions and friction points this paper is trying to address.

Improves image super-resolution without increasing complexity
Applies low-rank adaptation to lightweight CNN models
Enhances performance via knowledge distillation from teacher models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses ConvLoRA for efficient SR adaptation
Integrates SConvLB module into SPAN network
Employs spatial feature affinity distillation
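The distillation signal described above transfers second-order statistics by matching spatial self-similarity between teacher and student features. A minimal sketch of one plausible reading, assuming an affinity matrix of pairwise cosine similarities over spatial positions (function names are illustrative, not the paper's API):

```python
import torch
import torch.nn.functional as F

def spatial_affinity(feat):
    """Second-order spatial self-similarity of a feature map.

    feat: (B, C, H, W). Each spatial position's channel vector is
    L2-normalised, then pairwise cosine similarity over positions
    yields a (B, H*W, H*W) affinity matrix.
    """
    b, c, h, w = feat.shape
    f = F.normalize(feat.view(b, c, h * w), dim=1)  # unit-norm columns
    return torch.bmm(f.transpose(1, 2), f)

def affinity_distill_loss(student_feat, teacher_feat):
    """L1 distance between student and teacher spatial affinities.

    The affinity matrix depends only on spatial size, so teacher and
    student may have different channel counts. The teacher branch is
    detached: gradients flow to the student only.
    """
    return F.l1_loss(spatial_affinity(student_feat),
                     spatial_affinity(teacher_feat.detach()))
```

Matching affinities rather than raw features lets a pre-trained SPAN teacher supervise the student even when their intermediate representations are not channel-aligned.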