Universal Image Restoration Pre-training via Degradation Classification

📅 2025-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited generalization of image restoration models across diverse degradation types, this paper proposes Degradation Classification Pre-Training (DCPT): a self-supervised pre-training framework that uses the degradation type of an image as an extremely weak supervision signal. DCPT employs a two-stage encoder plus lightweight-decoder architecture and requires only degraded images for training. It is the first work to formulate degradation classification as a universal pre-training task for image restoration, enabling cross-degradation transfer while reusing all pre-trained parameters, including the decoder, and thereby avoiding the knowledge loss incurred when conventional approaches discard the decoder after pre-training. DCPT yields PSNR gains of up to 2.55 dB on the 10D all-in-one restoration task and up to 6.53 dB in mixed-degradation scenarios, and it improves the robustness and generalization of both CNN- and Transformer-based backbones.

📝 Abstract
This paper proposes Degradation Classification Pre-Training (DCPT), which enables models to learn to classify the degradation type of input images as a universal pre-training task for image restoration. Unlike existing self-supervised pre-training methods, DCPT uses the degradation type of the input image as an extremely weak supervision signal, which can be obtained effortlessly and is even intrinsic to all image restoration datasets. DCPT comprises two primary stages. First, image features are extracted by the encoder. Then, a lightweight decoder, such as ResNet18, classifies the degradation type of the input image solely from the features extracted in the first stage, without access to the input image itself. The encoder, pre-trained with this straightforward yet potent DCPT scheme, is then used for universal image restoration and achieves outstanding performance. Following DCPT, both convolutional neural networks (CNNs) and transformers show performance improvements, with gains of up to 2.55 dB in the 10D all-in-one restoration task and 6.53 dB in mixed degradation scenarios. Moreover, previous self-supervised pre-training methods, such as masked image modeling, discard the decoder after pre-training, while DCPT uses the pre-trained parameters more effectively. This advantage stems from the degradation classifier acquired during DCPT, which facilitates transfer learning between models of identical architecture trained on different degradation types. Source code and models are available at https://github.com/MILab-PKU/dcpt.
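The two-stage idea described above can be sketched in miniature: an encoder maps a degraded image to features, a lightweight classifier predicts the degradation type from those features alone, and the cross-entropy gradient flowing back through the classifier is what pre-trains the encoder. The sketch below is a minimal NumPy illustration of that training signal; all sizes, the linear "encoder", and the synthetic degradation signatures are assumptions for demonstration, not the paper's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_types, dim, feat = 3, 64, 16  # hypothetical sizes, not from the paper

def make_batch(n=32):
    """Synthetic 'degraded images': each degradation type shifts a distinct band."""
    y = rng.integers(0, n_types, n)
    x = rng.normal(size=(n, dim))
    for i, t in enumerate(y):
        x[i, t * 8:(t + 1) * 8] += 2.0  # crude stand-in for a degradation signature
    return x, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W_enc = rng.normal(scale=0.1, size=(dim, feat))      # "encoder" (one linear layer here)
W_cls = rng.normal(scale=0.1, size=(feat, n_types))  # lightweight decoder/classifier

lr = 0.1
for step in range(300):
    x, y = make_batch()
    f = x @ W_enc           # stage 1: extract features from the degraded image
    p = softmax(f @ W_cls)  # stage 2: classify degradation type from features only
    g_logits = (p - np.eye(n_types)[y]) / len(y)  # cross-entropy gradient
    g_cls = f.T @ g_logits
    g_enc = x.T @ (g_logits @ W_cls.T)  # the gradient reaching the encoder is the pre-training signal
    W_cls -= lr * g_cls
    W_enc -= lr * g_enc

x, y = make_batch(256)
acc = (softmax(x @ W_enc @ W_cls).argmax(axis=1) == y).mean()
print(f"degradation-classification accuracy: {acc:.2f}")
```

In DCPT proper, the classifier is discarded only at fine-tuning time in conventional pipelines; here the point of the sketch is simply that classifying the degradation type, a label "intrinsic" to any restoration dataset, is enough to shape the encoder's features.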
Problem

Research questions and friction points this paper is trying to address.

Image Restoration
Degradation Identification
Model Improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

DCPT
Image Degradation Classification
Pre-training for Image Restoration