From Image Denoisers to Regularizing Imaging Inverse Problems: An Overview

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This survey addresses inverse problems in medical imaging, remote sensing, and microscopy. Methodologically, it reviews a unified regularization framework that leverages pre-trained deep denoisers as implicit priors embedded within proximal optimization algorithms (e.g., ADMM, PGD). By establishing a theoretical link to score estimation via Tweedie's formula, the framework unifies the Plug-and-Play (PnP) and Regularization-by-Denoising (RED) paradigms. To ensure convergence, it examines non-expansiveness constraints, Lipschitz continuity, and local homogeneity assumptions on the denoiser. The key contribution is a rigorous bridge between deep priors and classical optimization theory, enabling high-fidelity reconstructions across multimodal imaging tasks, with significant improvements in structural fidelity and quantitative metrics (e.g., PSNR, SSIM). This line of work establishes a paradigm for data-driven inverse problem solving that is grounded in theoretical soundness while maintaining practical robustness and versatility.
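The summary's core mechanism, a learned denoiser replacing the proximal operator inside an iterative scheme, can be illustrated with a minimal sketch. This is not the paper's implementation; `denoiser`, `step`, and the least-squares data term are illustrative assumptions for a generic linear forward operator `A`:

```python
import numpy as np

def pnp_pgd(A, y, denoiser, step, n_iters=200):
    """Plug-and-play proximal gradient descent (sketch).

    Alternates a gradient step on the data-fidelity term
    ||A x - y||^2 / 2 with a denoiser applied where a proximal
    operator would normally appear.
    """
    x = A.T @ y  # crude initialization from the measurements
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)        # gradient of the data term
        x = denoiser(x - step * grad)   # denoiser replaces the prox
    return x
```

With the identity map as a (trivial) denoiser, the loop reduces to plain gradient descent on the least-squares problem, which makes the role of the plugged-in denoiser easy to see.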

📝 Abstract
Inverse problems lie at the heart of modern imaging science, with broad applications in areas such as medical imaging, remote sensing, and microscopy. Recent years have witnessed a paradigm shift in solving imaging inverse problems, where data-driven regularizers are used increasingly, leading to remarkably high-fidelity reconstruction. A particularly notable approach for data-driven regularization is to use learned image denoisers as implicit priors in iterative image reconstruction algorithms. This survey presents a comprehensive overview of this powerful and emerging class of algorithms, commonly referred to as plug-and-play (PnP) methods. We begin by providing a brief background on image denoising and inverse problems, followed by a short review of traditional regularization strategies. We then explore how proximal splitting algorithms, such as the alternating direction method of multipliers (ADMM) and proximal gradient descent (PGD), can naturally accommodate learned denoisers in place of proximal operators, and under what conditions such replacements preserve convergence. The role of Tweedie's formula in connecting optimal Gaussian denoisers and score estimation is discussed, which lays the foundation for regularization-by-denoising (RED) and more recent diffusion-based posterior sampling methods. We discuss theoretical advances regarding the convergence of PnP algorithms, both within the RED and proximal settings, emphasizing the structural assumptions that the denoiser must satisfy for convergence, such as non-expansiveness, Lipschitz continuity, and local homogeneity. We also address practical considerations in algorithm design, including choices of denoiser architecture and acceleration strategies.
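The role of Tweedie's formula mentioned in the abstract can be stated concretely. A sketch of the relation for Gaussian noise, with notation assumed here (not taken from the paper):

```latex
% Let y = x + n with n ~ N(0, sigma^2 I), and let p_sigma denote
% the marginal density of the noisy observation y.
% The MMSE Gaussian denoiser D^*(y) = E[x | y] satisfies
\[
  D^*(y) \;=\; \mathbb{E}[x \mid y]
          \;=\; y + \sigma^2 \, \nabla_y \log p_\sigma(y).
\]
```

Rearranged, a (near-)optimal denoiser yields the score \((D^*(y) - y)/\sigma^2 \approx \nabla \log p_\sigma(y)\), which is the quantity that RED and diffusion-based posterior sampling methods build on.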
Problem

Research questions and friction points this paper is trying to address.

Using denoisers as priors for solving imaging inverse problems
Analyzing convergence conditions for plug-and-play reconstruction methods
Connecting denoising techniques to regularization via Tweedie's formula
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play (PnP) methods built on learned image denoisers
Proximal splitting algorithms (ADMM, PGD) that accommodate denoisers in place of proximal operators
Tweedie's formula connecting optimal Gaussian denoisers to score estimation