🤖 AI Summary
This work addresses the challenge of provably efficient sampling from complex, multimodal posterior distributions in Bayesian inverse problems. We propose a diffusion-based probabilistic transport framework that progressively moves samples from an easy-to-sample source distribution to the target posterior within a plug-and-play architecture, integrating warm-start initialization, Langevin dynamics, and data-driven prior score learning. A key innovation is a Monte Carlo score estimator that avoids heuristic approximations and enables the first non-asymptotic error bound, which quantifies the errors arising from the score-estimation, initialization, and sampling stages and reveals the critical influence of the prior score error and the problem's condition number on convergence. The resulting convergence guarantees hold even in multimodal settings. Experiments demonstrate substantial improvements in sampling accuracy and stability across diverse inverse problems.
📝 Abstract
This paper proposes a novel diffusion-based posterior sampling method within a plug-and-play (PnP) framework. Our approach constructs a probability transport from an easy-to-sample terminal distribution to the target posterior, using a warm-start strategy to initialize the particles. To approximate the posterior score, we develop a Monte Carlo estimator in which particles are generated using Langevin dynamics, avoiding the heuristic approximations commonly used in prior work. The score governing the Langevin dynamics is learned from data, enabling the model to capture rich structural features of the underlying prior distribution. On the theoretical side, we provide non-asymptotic error bounds, showing that the method converges even for complex, multimodal target posterior distributions. These bounds explicitly quantify the errors arising from posterior score estimation, the warm-start initialization, and the posterior sampling procedure. Our analysis further clarifies how the prior score-matching error and the condition number of the Bayesian inverse problem influence overall performance. Finally, we present numerical experiments demonstrating the effectiveness of the proposed method across a range of inverse problems.
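The Langevin dynamics driving the particle generation above follow the standard unadjusted update x ← x + η∇log p(x) + √(2η)ξ. The sketch below is a minimal illustration of that update only, not the paper's full method: the `score` function here is a toy analytic stand-in (the score −x of a standard Gaussian) rather than a learned prior score, and the step size and iteration count are arbitrary choices for the example.

```python
import numpy as np

def langevin_sample(score, x0, step=1e-2, n_steps=2000, rng=None):
    """Unadjusted Langevin dynamics: x <- x + step*score(x) + sqrt(2*step)*noise.

    `score` is any callable returning grad log-density at x; in the paper's
    setting it would be a learned (data-driven) score network.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * score(x) + np.sqrt(2.0 * step) * noise
    return x

# Toy target: standard 1D Gaussian N(0, 1), whose score is simply -x.
score = lambda x: -x
samples = np.array([langevin_sample(score, x0=3.0, rng=i) for i in range(200)])
print(samples.mean(), samples.std())  # should be near 0 and 1, respectively
```

For a multimodal target the score would instead come from a mixture density or a trained network, and many interacting particles would be evolved in parallel, which is where the Monte Carlo averaging in the proposed estimator enters.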