🤖 AI Summary
In maximum a posteriori (MAP) estimation for inverse problems, practitioners often replace the analytically intractable proximal operator of the negative log-prior with a pre-trained denoiser, a widely adopted yet theoretically ungrounded heuristic.
Method: We propose a structurally simple algorithm for MAP inference that stays faithful to schemes used in practice. Assuming the prior is log-concave, we prove global convergence of the algorithm and establish its equivalence to gradient descent on a smoothed proximal objective.
Contribution/Results: This work provides the first rigorous convergence guarantee for denoiser-based approximation of the proximal operator, bridging empirical denoising priors (e.g., DnCNN, DDPM) and classical optimization at the theoretical level. It establishes a sound optimization foundation for mainstream methodologies, including Denoising-based Compressed Sensing (DnC) and score-matching-driven inverse problem solving.
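For concreteness, the objects the summary refers to take the standard form below (our notation, which may differ from the paper's: $A$ is the forward operator, $\sigma$ the observation-noise level, $\lambda$ the proximal step size):

$$
\hat{x}_{\mathrm{MAP}} \in \arg\min_{x}\; \frac{1}{2\sigma^2}\,\|Ax - y\|_2^2 \;-\; \log p(x),
\qquad
\operatorname{prox}_{-\lambda \log p}(v) \;=\; \arg\min_{x}\; -\log p(x) + \frac{1}{2\lambda}\,\|x - v\|_2^2 .
$$

The second minimization has no closed form for a learned prior, and it is precisely this step that practitioners replace with a pretrained denoiser.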
📝 Abstract
Denoiser models have become powerful tools for inverse problems, enabling the use of pretrained networks to approximate the score of a smoothed prior distribution. These models are often used in heuristic iterative schemes aimed at solving Maximum a Posteriori (MAP) optimisation problems, where the proximal operator of the negative log-prior plays a central role. In practice, this operator is intractable, and practitioners plug in a pretrained denoiser as a surrogate, despite the lack of general theoretical justification for this substitution. In this work, we show that a simple algorithm, closely related to several used in practice, provably converges to the proximal operator under a log-concavity assumption on the prior $p$. We show that this algorithm can be interpreted as a gradient descent on smoothed proximal objectives. Our analysis thus provides a theoretical foundation for a class of empirically successful but previously heuristic methods.
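The phrase "gradient descent on smoothed proximal objectives" admits a compact illustration. The sketch below is ours, not the paper's code: a denoiser supplies the score of the Gaussian-smoothed prior through Tweedie's identity, and plain gradient descent is run on the resulting smoothed proximal objective. The function `denoise` and the parameters `lam`, `sigma`, and the step schedule are all hypothetical placeholders.

```python
import numpy as np

def denoiser_prox(v, denoise, lam, sigma, n_steps=500, step=0.05):
    """Approximate prox_{-lam * log p}(v) by gradient descent on the
    smoothed proximal objective -log p_sigma(x) + ||x - v||^2 / (2*lam),
    with the score of p_sigma supplied by a denoiser.

    Illustrative sketch only; the iteration analyzed in the paper may
    differ. `denoise(x, sigma)` is assumed to be a Gaussian denoiser
    at noise level `sigma`.
    """
    x = v.copy()
    for _ in range(n_steps):
        # Tweedie's identity: for the MMSE denoiser,
        # grad log p_sigma(x) = (denoise(x, sigma) - x) / sigma**2.
        score = (denoise(x, sigma) - x) / sigma**2
        # Gradient of the smoothed proximal objective.
        grad = -score + (x - v) / lam
        x = x - step * grad
    return x

if __name__ == "__main__":
    # Toy check with a Gaussian prior p = N(0, I): its MMSE denoiser is
    # the shrinkage x / (1 + sigma^2), and the exact (unsmoothed) prox
    # is v / (1 + lam), so the iterate should land close to it.
    denoise = lambda x, s: x / (1.0 + s**2)
    v = np.array([2.0, -1.0, 0.5])
    x_hat = denoiser_prox(v, denoise, lam=0.5, sigma=0.1)
    print(x_hat, v / 1.5)  # roughly agree; smoothing shifts it slightly
```

Note that when $p$ is log-concave, the Gaussian-smoothed prior is log-concave as well, so the smoothed proximal objective is strongly convex, which is the kind of structure that makes a global convergence guarantee plausible.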