GenDeg: Diffusion-Based Degradation Synthesis for Generalizable All-in-One Image Restoration

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor generalization of All-in-One Image Restoration (AIOR) models under out-of-distribution degradations and the scarcity of real paired degradation data, this paper proposes GenDeg, a controllable degradation synthesis framework built on latent diffusion models. GenDeg introduces a dual-conditioning mechanism that jointly controls degradation type and intensity, supporting six degradation categories—haze, rain, snow, motion blur, low-light, and raindrops—with continuous intensity adjustment. The authors further construct GenDS, a large-scale hybrid dataset of over 750k synthetic and real samples. AIOR models trained on GenDS achieve state-of-the-art generalization across multiple benchmarks: average PSNR improves by 1.82 dB on unseen degradation types, and robustness on entirely unseen degradations increases by over 40%, showing that controllably synthesized degradations transfer effectively to real-world restoration tasks.

📝 Abstract
Deep learning-based models for All-In-One Image Restoration (AIOR) have achieved significant advancements in recent years. However, their practical applicability is limited by poor generalization to samples outside the training distribution. This limitation arises primarily from insufficient diversity in degradation variations and scenes within existing datasets, resulting in inadequate representations of real-world scenarios. Additionally, capturing large-scale real-world paired data for degradations such as haze, low-light, and raindrops is often cumbersome and sometimes infeasible. In this paper, we leverage the generative capabilities of latent diffusion models to synthesize high-quality degraded images from their clean counterparts. Specifically, we introduce GenDeg, a degradation- and intensity-aware conditional diffusion model capable of producing diverse degradation patterns on clean images. Using GenDeg, we synthesize over 550k samples across six degradation types: haze, rain, snow, motion blur, low-light, and raindrops. These generated samples are integrated with existing datasets to form the GenDS dataset, comprising over 750k samples. Our experiments reveal that image restoration models trained on the GenDS dataset exhibit significant improvements in out-of-distribution performance compared to those trained solely on existing datasets. Furthermore, we provide comprehensive analyses of the implications of diffusion model-based synthetic degradations for AIOR.
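The abstract describes conditioning the diffusion model on both a degradation type and an intensity level. A minimal sketch of how such a joint conditioning vector might be assembled is shown below; the paper does not specify this exact scheme, so the one-hot type code, sinusoidal intensity embedding, and all function names here are illustrative assumptions, not the authors' implementation.

```python
import math

DEGRADATION_TYPES = ["haze", "rain", "snow", "motion_blur", "low_light", "raindrop"]

def intensity_embedding(intensity: float, dim: int = 8) -> list[float]:
    """Sinusoidal embedding of a scalar degradation intensity in [0, 1],
    allowing continuous (rather than discrete) intensity control."""
    emb = []
    for i in range(dim // 2):
        freq = 10.0 ** (i / (dim // 2))  # geometrically spaced frequencies
        emb.append(math.sin(intensity * freq))
        emb.append(math.cos(intensity * freq))
    return emb

def degradation_condition(deg_type: str, intensity: float, dim: int = 8) -> list[float]:
    """One-hot degradation-type code concatenated with the intensity
    embedding; a vector like this could be injected into the diffusion
    denoiser via cross-attention or feature modulation."""
    one_hot = [1.0 if t == deg_type else 0.0 for t in DEGRADATION_TYPES]
    return one_hot + intensity_embedding(intensity, dim)

cond = degradation_condition("rain", 0.7)  # 6 type dims + 8 intensity dims
```

Varying the intensity argument while keeping the type fixed would then steer the synthesized degradation strength continuously, which is the core of the controllability claim.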
Problem

Research questions and friction points this paper is trying to address.

Limited generalization of AIOR models to unseen data
Insufficient diversity in existing degradation datasets
Challenges in capturing real-world paired degradation data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages latent diffusion models for degradation synthesis
Introduces GenDeg, a degradation-aware conditional diffusion model
Synthesizes over 550k samples across six degradation types
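The GenDS dataset combines the synthesized pairs with existing real datasets. A toy sketch of that pooling step, under the assumption that each sample is a (clean, degraded) pair tagged with its provenance (the paper does not describe this exact API):

```python
import random

def build_gends(real_pairs, synthetic_pairs, seed=0):
    """Pool real and synthetic (clean, degraded) pairs into one shuffled
    training set, tagging each sample with its source so later analyses
    can compare the contribution of each."""
    dataset = [{"pair": p, "source": "real"} for p in real_pairs]
    dataset += [{"pair": p, "source": "synthetic"} for p in synthetic_pairs]
    random.Random(seed).shuffle(dataset)  # deterministic shuffle for reproducibility
    return dataset
```

In the paper's setting the synthetic pool (550k+) dominates the real one, which is what lifts out-of-distribution performance when restoration models train on the mixture.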