🤖 AI Summary
Traditional Plug-and-Play Priors (PnP) methods embed denoising priors solely in the image domain, which limits their ability to preserve structural details. To address this, the work extends the PnP framework to the *analysis domain*, specifically the gradient domain, introducing Analysis-domain PnP (APnP). The key innovation is a learnable gradient-domain denoiser, formulated as a data-driven, analysis-based form of total variation regularization. Leveraging this implicit prior, the authors develop two efficient reconstruction algorithms: APnP-HQS and APnP-ADMM. Extensive experiments on image deblurring and super-resolution demonstrate that APnP performs on par with conventional image-domain PnP methods while offering superior edge preservation and closer consistency with its analysis-based theoretical formulation. These results support the effectiveness, feasibility, and practical utility of the analysis-domain PnP paradigm.
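In equation form (notation ours, inferred from the summary: $A$ the forward operator, $D$ the discrete gradient, $y$ the measurements, $R$ the implicit prior realized by the learned denoiser), the analysis formulation regularizes $Dx$ rather than $x$ itself:

$$\hat{x} = \arg\min_x \; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda\, R(Dx).$$

With $R(Dx) = \|Dx\|_1$ this reduces to classical anisotropic TV; APnP instead realizes $R$ implicitly through a Gaussian denoiser trained on gradient fields.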
📝 Abstract
Plug-and-Play Priors (PnP) is a popular framework for solving imaging inverse problems by integrating learned priors in the form of denoisers trained to remove Gaussian noise from images. In standard PnP methods, the denoiser is applied directly in the image domain, serving as an implicit prior on natural images. This paper considers an alternative analysis formulation of PnP, in which the prior is imposed on a transformed representation of the image, such as its gradient. Specifically, we train a Gaussian denoiser to operate in the gradient domain, rather than on the image itself. Conceptually, this is an extension of total variation (TV) regularization to learned TV regularization. To incorporate this gradient-domain prior in image reconstruction algorithms, we develop two analysis PnP algorithms based on half-quadratic splitting (APnP-HQS) and the alternating direction method of multipliers (APnP-ADMM). We evaluate our approach on image deblurring and super-resolution, demonstrating that the analysis formulation achieves performance comparable to image-domain PnP algorithms.
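To make the half-quadratic splitting concrete, below is a minimal NumPy sketch of an HQS loop of this shape for deblurring, assuming a periodic (circular) blur model so the quadratic x-step has an FFT closed form. The learned gradient-domain denoiser is replaced by a soft-thresholding stand-in, which reduces the prior to classical anisotropic TV; `apnp_hqs_deblur`, `psf2otf`, and all parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def psf2otf(psf, shape):
    """Embed the PSF in an image-sized array, shift its center to the origin, FFT it."""
    otf = np.zeros(shape)
    otf[:psf.shape[0], :psf.shape[1]] = psf
    otf = np.roll(otf, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(otf)

def grad(x):
    """Analysis operator D: forward finite differences with periodic boundaries."""
    return (np.roll(x, -1, axis=0) - x,   # vertical gradient
            np.roll(x, -1, axis=1) - x)   # horizontal gradient

def soft_threshold(z, tau):
    """Stand-in gradient-domain denoiser (soft-thresholding = anisotropic TV prior).
    In APnP, this step would instead call a learned Gaussian denoiser on the
    gradient field."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def apnp_hqs_deblur(y, psf, mu=0.1, tau=0.02, n_iters=50):
    """HQS iterations for min_x 0.5||k*x - y||^2 + R(Dx): the prior acts on Dx."""
    K = psf2otf(psf, y.shape)
    # Fourier symbols of the two difference operators in grad().
    f0 = np.fft.fftfreq(y.shape[0])[:, None]
    f1 = np.fft.fftfreq(y.shape[1])[None, :]
    Dv = np.exp(2j * np.pi * f0) - 1.0
    Dh = np.exp(2j * np.pi * f1) - 1.0
    denom = np.abs(K) ** 2 + mu * (np.abs(Dv) ** 2 + np.abs(Dh) ** 2)
    Ky = np.conj(K) * np.fft.fft2(y)
    x = y.copy()
    for _ in range(n_iters):
        # z-step: "denoise" the gradient of the current estimate.
        gv, gh = grad(x)
        zv, zh = soft_threshold(gv, tau), soft_threshold(gh, tau)
        # x-step: quadratic data fit + quadratic coupling to z, solved exactly via FFT.
        num = Ky + mu * (np.conj(Dv) * np.fft.fft2(zv) + np.conj(Dh) * np.fft.fft2(zh))
        x = np.real(np.fft.ifft2(num / denom))
    return x

# Example usage (hypothetical): deblur an image y blurred by a 9x9 box kernel.
# psf = np.ones((9, 9)) / 81.0
# x_hat = apnp_hqs_deblur(y, psf, mu=0.1, tau=0.02, n_iters=50)
```

The ADMM variant described in the paper follows the same splitting but adds a scaled dual variable that accumulates the residual between `grad(x)` and `z` across iterations.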