🤖 AI Summary
This work establishes the first rigorous convergence theory for Plug-and-Play proximal gradient descent (PnP-PGD) under prior mismatch—i.e., when the denoiser’s training distribution differs from the distribution of the inference task. Using new analytical techniques, we prove for the first time that PnP-PGD remains convergent in this more challenging setting, while dispensing with several strong assumptions commonly required in prior analyses, such as Lipschitz continuity of the denoiser or non-local means properties. These results strengthen the theoretical reliability of PnP methods in practice and provide a foundation for algorithm design under prior mismatch.
📝 Abstract
In this work, we provide a new convergence theory for plug-and-play proximal gradient descent (PnP-PGD) under prior mismatch, where the denoiser is trained on a data distribution different from that of the inference task at hand. To the best of our knowledge, this is the first convergence proof of PnP-PGD under prior mismatch. Compared with existing theoretical results for PnP algorithms, our new results remove the need for several restrictive and unverifiable assumptions. Moreover, we derive a convergence theory for equivariant PnP (EPnP) under the prior mismatch setting, proving that EPnP reduces error variance and explicitly tightens the convergence bound.
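For context, the PnP-PGD scheme the abstract analyzes alternates a gradient step on the data-fidelity term with a denoiser applied in place of the proximal operator of the prior. The sketch below is illustrative only: the least-squares fidelity term, the step size, and the soft-thresholding function standing in for a learned denoiser are assumptions for this toy example, not the paper's setup.

```python
import numpy as np

def pnp_pgd(y, A, denoiser, step=0.5, iters=100):
    """Plug-and-Play proximal gradient descent (sketch).

    Gradient step on f(x) = 0.5 * ||A x - y||^2, followed by a
    denoiser replacing the proximal operator of the prior.
    """
    x = A.T @ y  # simple initialization
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = denoiser(x - step * grad)  # denoiser plays the role of prox
    return x

def soft_threshold(x, lam=0.05):
    """Toy stand-in denoiser: soft-thresholding (prox of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Small synthetic recovery problem: sparse signal, Gaussian forward operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20)) / np.sqrt(30)
x_true = np.zeros(20)
x_true[:3] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(30)

x_hat = pnp_pgd(y, A, soft_threshold)
```

With a soft-thresholding denoiser this iteration coincides with ISTA for an l1-regularized least-squares problem; a learned denoiser trained on a mismatched distribution is the case the paper's theory addresses.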