Posterior Sampling by Combining Diffusion Models with Annealed Langevin Dynamics

📅 2025-10-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses efficient sampling from the posterior distribution $p(x \mid y)$ in noisy linear inverse problems, such as image inpainting, deblurring, and MRI reconstruction. The authors propose a framework that integrates pretrained diffusion models with annealed Langevin dynamics: the diffusion model provides a score-function estimate, while temperature-annealed Langevin sampling enables conditional posterior sampling. Theoretically, they establish convergence guarantees under only an $L^4$-norm error bound on the score estimator, substantially relaxing the accuracy requirements of prior analyses; under a log-concave prior assumption, they further provide the first polynomial-time convergence and robustness guarantees for such diffusion-based posterior sampling. Experiments demonstrate that the method achieves both high sampling efficiency and strong reconstruction fidelity across diverse inverse problems.

📝 Abstract
Given a noisy linear measurement $y = Ax + \xi$ of a distribution $p(x)$, and a good approximation to the prior $p(x)$, when can we sample from the posterior $p(x \mid y)$? Posterior sampling provides an accurate and fair framework for tasks such as inpainting, deblurring, and MRI reconstruction, and several heuristics attempt to approximate it. Unfortunately, approximate posterior sampling is computationally intractable in general. To sidestep this hardness, we focus on (local or global) log-concave distributions $p(x)$. In this regime, Langevin dynamics yields posterior samples when the exact scores of $p(x)$ are available, but it is brittle to score-estimation error, requiring an MGF bound (sub-exponential error). By contrast, in the unconditional setting, diffusion models succeed with only an $L^2$ bound on the score error. We prove that combining diffusion models with an annealed variant of Langevin dynamics achieves conditional sampling in polynomial time using merely an $L^4$ bound on the score error.
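To make the setup concrete, here is a minimal sketch of annealed Langevin posterior sampling for $y = Ax + \xi$ with Gaussian measurement noise. It is illustrative only, not the paper's algorithm: the score of the prior is passed in as a callable (in practice a diffusion model would supply it; here a toy closed-form score is used), and the annealing schedule, step sizes, and iteration counts are arbitrary choices.

```python
import numpy as np

def annealed_langevin_posterior(y, A, sigma, prior_score, n_steps=200,
                                eta0=0.1, n_levels=5, seed=None):
    """Sample (approximately) from p(x | y) for y = A x + xi,
    xi ~ N(0, sigma^2 I), via annealed Langevin dynamics: run Langevin
    updates at a decreasing sequence of step sizes, using
    prior_score(x) + likelihood gradient as the posterior score.
    Illustrative sketch; hyperparameters are arbitrary."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for level in range(n_levels):
        eta = eta0 * 0.5 ** level  # anneal: halve the step size per level
        for _ in range(n_steps):
            # posterior score = prior score + gradient of the Gaussian
            # log-likelihood  A^T (y - A x) / sigma^2
            grad = prior_score(x) + A.T @ (y - A @ x) / sigma**2
            x = x + 0.5 * eta * grad + np.sqrt(eta) * rng.standard_normal(x.shape)
    return x

# Toy check: with prior N(0, I) (score -x), A = I, sigma = 1,
# the posterior is N(y / 2, I / 2), so samples should center on y / 2.
y = np.array([2.0, -1.0])
A = np.eye(2)
samples = np.array([
    annealed_langevin_posterior(y, A, 1.0, lambda x: -x, seed=i)
    for i in range(200)
])
```

In the Gaussian toy case the posterior mean is available in closed form, which gives a quick sanity check on the sampler; with a diffusion-model score the same loop applies but the score call replaces the analytic `prior_score`.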
Problem

Research questions and friction points this paper is trying to address.

Sampling from posterior distributions given noisy measurements
Overcoming computational intractability in approximate posterior sampling
Achieving conditional sampling with polynomial time complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining diffusion models with annealed Langevin dynamics
Achieving conditional sampling under only an $L^4$ score-error bound
Enabling polynomial-time posterior sampling for log-concave distributions