UnfoldLDM: Deep Unfolding-based Blind Image Restoration with Latent Diffusion Priors

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deep unfolding networks (DUNs) for blind image restoration (BIR) face two key limitations: a strong dependence on predefined degradation models, which hinders generalization to unseen degradations, and gradient-descent-based optimization that discards high-frequency details and induces over-smoothing. To address these, we propose UnfoldLDM, a framework that integrates latent diffusion models (LDMs) into deep unfolding. At each stage, a multi-granularity degradation-aware (MGDA) module performs the gradient descent step, a degradation-resistant LDM (DR-LDM) extracts compact degradation-invariant priors, and an over-smoothing correction transformer (OCFormer) explicitly recovers high-frequency textures. As a plug-and-play architecture requiring no explicit degradation prior, UnfoldLDM achieves state-of-the-art performance across diverse BIR tasks, including blind super-resolution, deblurring, and denoising, while improving texture fidelity and compatibility with downstream vision applications.

📝 Abstract
Deep unfolding networks (DUNs) combine the interpretability of model-based methods with the learning ability of deep networks, yet remain limited for blind image restoration (BIR). Existing DUNs suffer from: (1) degradation-specific dependency, as their optimization frameworks are tied to a known degradation model, making them unsuitable for BIR tasks; and (2) over-smoothing bias, which results from directly feeding gradient descent outputs, dominated by low-frequency content, into the proximal term, suppressing fine textures. To overcome these issues, we propose UnfoldLDM, which integrates DUNs with a latent diffusion model (LDM) for BIR. In each stage, UnfoldLDM employs a multi-granularity degradation-aware (MGDA) module as the gradient descent step. MGDA models BIR as an unknown degradation estimation problem and estimates both the holistic degradation matrix and its decomposed forms, enabling robust degradation removal. For the proximal step, we design a degradation-resistant LDM (DR-LDM) to extract compact degradation-invariant priors from the MGDA output. Guided by this prior, an over-smoothing correction transformer (OCFormer) explicitly recovers high-frequency components and enhances texture details. This combination ensures the final result is degradation-free and visually rich. Experiments show that UnfoldLDM achieves leading performance on various BIR tasks and benefits downstream tasks. Moreover, our design is compatible with existing DUN-based methods, serving as a plug-and-play framework. Code will be released.
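The per-stage structure described in the abstract, alternating a degradation-aware gradient step (MGDA) with a prior-guided proximal step (DR-LDM feeding OCFormer), can be sketched as a plain unfolding loop. Note this is a minimal illustrative sketch only: the three modules are learned networks in the paper, and the stand-in functions, scalar "degradation" placeholder, and step sizes below are assumptions, not the paper's actual formulation.

```python
import numpy as np


def mgda_gradient_step(x, y):
    """Stand-in for the MGDA module (assumption): estimates the unknown
    degradation and takes a data-consistency gradient step toward the
    observation y. A scalar plays the role of the degradation matrix."""
    estimated_degradation = 0.5                      # placeholder estimate
    residual = estimated_degradation * x - y         # data-fidelity residual
    return x - 0.1 * estimated_degradation * residual


def dr_ldm_prior(v):
    """Stand-in for DR-LDM (assumption): extracts a compact,
    degradation-invariant prior from the gradient-step output.
    Here it is just a damped copy, i.e. a low-frequency surrogate."""
    return 0.9 * v


def ocformer_proximal(v, prior):
    """Stand-in for OCFormer (assumption): the proximal step, guided by
    the prior, re-amplifies the high-frequency residual that a purely
    smoothing proximal operator would suppress."""
    high_freq = v - prior                            # crude high-frequency part
    return prior + 1.2 * high_freq                   # boosted texture component


def unfold_restore(y, num_stages=8):
    """Run the unfolding loop: each stage is one gradient step followed
    by one prior-guided proximal step, as in the paper's stage layout."""
    x = y.copy()
    for _ in range(num_stages):
        v = mgda_gradient_step(x, y)                 # degradation-aware update
        prior = dr_ldm_prior(v)                      # degradation-invariant prior
        x = ocformer_proximal(v, prior)              # over-smoothing correction
    return x
```

The point of the sketch is the control flow: unlike classical DUNs, no known degradation operator appears in the gradient step (MGDA estimates it), and the proximal step is split into prior extraction plus an explicit high-frequency correction rather than a single denoiser.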
Problem

Research questions and friction points this paper is trying to address.

Overcoming degradation-specific dependency in blind image restoration methods
Addressing over-smoothing bias that suppresses fine texture details
Integrating deep unfolding networks with latent diffusion priors for BIR
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-granularity degradation-aware module for blind estimation
Degradation-resistant latent diffusion model for invariant priors
Over-smoothing correction transformer recovers high-frequency details