🤖 AI Summary
This paper addresses two issues in plug-and-play (PnP) image restoration: the mismatch between off-the-shelf denoisers and the optimization priors they stand in for, and the lack of theoretical guarantees. We propose the Gradient-Step Denoiser (GSD), which explicitly models a deep denoising network as a gradient-descent step, or as a proximal operator, within an optimization framework. Methodologically, we design a differentiable architecture grounded in proximal operator theory and jointly train the denoiser with the optimization iterations so that the algorithm provably converges. Theoretically, GSD unifies implicit deep priors with explicit optimization models, significantly improving interpretability and stability. Experiments demonstrate that GSD achieves state-of-the-art restoration performance across diverse inverse problems, including deblurring and computed tomography (CT) reconstruction, while providing rigorous convergence guarantees. This enables efficient and robust image restoration without sacrificing theoretical soundness.
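To make the plug-and-play idea concrete, the sketch below shows a generic PnP proximal-gradient loop in which the proximity operator of the prior is replaced by a denoiser. The function names (`pnp_pgd`, `grad_f`, `denoiser`) and the identity placeholder denoiser are assumptions made for illustration, not the paper's implementation.

```python
# Minimal PnP proximal-gradient sketch: the proximity operator of the image
# prior is replaced by a plugged-in denoiser D, giving
#   x_{k+1} = D(x_k - step * grad_f(x_k)).
import numpy as np

def pnp_pgd(x0, grad_f, denoiser, step=0.5, n_iter=100):
    """Run PnP proximal-gradient descent from x0."""
    x = x0.copy()
    for _ in range(n_iter):
        x = denoiser(x - step * grad_f(x))  # denoiser plays the role of the prox of the prior
    return x

if __name__ == "__main__":
    # Toy quadratic data-fidelity f(x) = 0.5 * ||A x - y||^2 with an identity
    # "denoiser" standing in for any pretrained denoising network.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(64, 64)) / 16.0
    y = rng.normal(size=64)
    grad_f = lambda x: A.T @ (A @ x - y)
    denoiser = lambda x: x  # placeholder: plug in any denoiser here
    x_hat = pnp_pgd(np.zeros(64), grad_f, denoiser, step=0.5, n_iter=200)
    print(x_hat.shape)
```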
📝 Abstract
In this paper we analyze the Gradient-Step Denoiser and its use in Plug-and-Play algorithms. The Plug-and-Play paradigm takes an optimization algorithm and replaces the proximity operator or the gradient-descent step of an image prior with an off-the-shelf denoiser. Usually this image prior is implicit and cannot be written explicitly, but the Gradient-Step Denoiser is trained to be exactly the gradient-descent operator or the proximity operator of an explicit functional while preserving state-of-the-art denoising performance.
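To illustrate how a denoiser can be an exact gradient-descent step on an explicit functional, here is a minimal PyTorch sketch. The scalar potential g(x) = 0.5 * ||x - N(x)||^2 and the small convolutional network N below are assumptions made for the example, not the architecture from the paper; the point is that the gradient is obtained by automatic differentiation, so D(x) = x - ∇g(x) holds exactly.

```python
# Sketch of a gradient-step denoiser: D(x) = x - grad g(x), where g is an
# explicit scalar functional computed from a network and the gradient comes
# from autodiff, so denoising is exactly one gradient-descent step on g.
import torch
import torch.nn as nn

class GradientStepDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional network N (illustrative only); the potential is
        # g(x) = 0.5 * ||x - N(x)||^2, evaluated per image in the batch.
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def potential(self, x):
        # Explicit functional g(x): one scalar per image in the batch.
        r = x - self.net(x)
        return 0.5 * r.pow(2).flatten(1).sum(dim=1)

    def forward(self, x):
        # Denoising = one exact gradient-descent step on the potential g.
        x = x.requires_grad_(True)
        g = self.potential(x).sum()
        grad = torch.autograd.grad(g, x, create_graph=self.training)[0]
        return x - grad

if __name__ == "__main__":
    D = GradientStepDenoiser()
    noisy = torch.rand(2, 1, 32, 32)
    denoised = D(noisy)
    print(denoised.shape)  # torch.Size([2, 1, 32, 32])
```

Because the functional g is explicit and differentiable, such a denoiser can be trained with a standard denoising loss while convergence of the resulting Plug-and-Play scheme can be argued in terms of g.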