Provable Diffusion Posterior Sampling for Bayesian Inversion

📅 2025-12-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of provably efficient sampling from complex, multimodal posterior distributions in Bayesian inverse problems. We propose a diffusion-based probabilistic transport framework that progressively transports samples from an easy-to-sample source distribution to the target posterior within a plug-and-play architecture. The method integrates warm-start initialization, Langevin dynamics, and data-driven prior score learning. A key innovation is the introduction of a Monte Carlo score estimator that avoids heuristic approximations and enables the first non-asymptotic error bound—quantifying errors arising from score estimation, initialization, and sampling stages—and reveals the critical influence of prior score error and problem condition number on convergence. We establish theoretical convergence guarantees even in multimodal settings. Experiments demonstrate substantial improvements in sampling accuracy and stability across diverse inverse problems.

Technology Category

Application Category

📝 Abstract
This paper proposes a novel diffusion-based posterior sampling method within a plug-and-play (PnP) framework. Our approach constructs a probability transport from an easy-to-sample terminal distribution to the target posterior, using a warm-start strategy to initialize the particles. To approximate the posterior score, we develop a Monte Carlo estimator in which particles are generated using Langevin dynamics, avoiding the heuristic approximations commonly used in prior work. The score governing the Langevin dynamics is learned from data, enabling the model to capture rich structural features of the underlying prior distribution. On the theoretical side, we provide non-asymptotic error bounds, showing that the method converges even for complex, multi-modal target posterior distributions. These bounds explicitly quantify the errors arising from posterior score estimation, the warm-start initialization, and the posterior sampling procedure. Our analysis further clarifies how the prior score-matching error and the condition number of the Bayesian inverse problem influence overall performance. Finally, we present numerical experiments demonstrating the effectiveness of the proposed method across a range of inverse problems.
Problem

Research questions and friction points this paper is trying to address.

Develops a diffusion-based posterior sampling method for Bayesian inversion
Estimates posterior score using Monte Carlo with Langevin dynamics
Provides non-asymptotic error bounds for complex multimodal posteriors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses diffusion-based posterior sampling in plug-and-play framework
Estimates posterior score via Monte Carlo with Langevin dynamics
Provides non-asymptotic error bounds for complex posterior distributions
🔎 Similar Papers
No similar papers found.
J
Jinyuan Chang
Joint Laboratory of Data Science and Business Intelligence, Southwestern University of Finance and Economics, Chengdu, Sichuan 611130, China.
Chenguang Duan
Chenguang Duan
Postdoctoral researcher, RWTH Aachen University
Scientific machine learningLearning theoryGenerative modelsNonparametric statistics
Y
Yuling Jiao
School of Artificial Intelligence, Wuhan University, Wuhan, Hubei 430072, China.
Ruoxuan Li
Ruoxuan Li
Columbia University
computational cognitive sciencecomputational social science
J
Jerry Zhijian Yang
School of Mathematics and Statistics, Wuhan University, Wuhan, Hubei 430072, China.
Cheng Yuan
Cheng Yuan
Associate Professor, School of Mathematics and Statistics, Central China Normal University
Computational PhysicsDeep Learning