Efficient Approximate Posterior Sampling with Annealed Langevin Monte Carlo

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficient posterior sampling from $p(x|y)$ in score-based generative models. Methodologically, it frames posterior sampling as a more general "tilting" problem of biasing a distribution towards a measurement, and runs annealed Langevin dynamics driven by a trained prior score network together with the measurement model. Theoretically, it gives the first polynomial-time guarantees for approximate posterior sampling: the output distribution is simultaneously close to the posterior of a noised prior in KL divergence and close to the true posterior in Fisher divergence. Intuitively, this combination ensures that samples are consistent with both the measurement and the prior, covering inverse-problem tasks such as image super-resolution, stylization, and reconstruction.

📝 Abstract
We study the problem of posterior sampling in the context of score-based generative models. We have a trained score network for a prior $p(x)$, a measurement model $p(y|x)$, and are tasked with sampling from the posterior $p(x|y)$. Prior work has shown this to be intractable in KL (in the worst case) under well-accepted computational hardness assumptions. Despite this, popular algorithms for tasks such as image super-resolution, stylization, and reconstruction enjoy empirical success. Rather than establishing distributional assumptions or restricted settings under which exact posterior sampling is tractable, we view this as a more general "tilting" problem of biasing a distribution towards a measurement. Under minimal assumptions, we show that one can tractably sample from a distribution that is simultaneously close to the posterior of a noised prior in KL divergence and the true posterior in Fisher divergence. Intuitively, this combination ensures that the resulting sample is consistent with both the measurement and the prior. To the best of our knowledge these are the first formal results for (approximate) posterior sampling in polynomial time.
Problem

Research questions and friction points this paper is trying to address.

Sampling from the posterior $p(x|y)$ in score-based generative models
Overcoming worst-case intractability under computational hardness assumptions
Ensuring samples are consistent with both the measurement and the prior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Annealed Langevin Monte Carlo sampling
Simultaneous closeness in KL and Fisher divergence
Polynomial-time approximate posterior sampling
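The annealed Langevin idea above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it assumes a linear Gaussian measurement model $y = Ax + \mathcal{N}(0, \sigma_y^2 I)$, and `score_prior` is a stand-in for the trained score network (the function name, signature, and step-size schedule are illustrative assumptions).

```python
import numpy as np

def annealed_langevin_posterior(score_prior, y, A, sigma_y, sigmas,
                                steps=100, eps=2e-3, rng=None):
    """Approximately sample from p(x|y) ∝ p(x) p(y|x) via annealed Langevin dynamics.

    score_prior(x, sigma): stand-in for a trained score network, ≈ ∇ log p_sigma(x).
    For the assumed linear Gaussian measurement y = A x + N(0, sigma_y^2 I),
    the likelihood score is A^T (y - A x) / sigma_y^2.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(A.shape[1])
    for sigma in sigmas:                        # anneal from large to small noise
        step = eps * (sigma / sigmas[-1]) ** 2  # larger steps at higher noise levels
        for _ in range(steps):
            # combined score: (smoothed) prior score + measurement score
            grad = score_prior(x, sigma) + A.T @ (y - A @ x) / sigma_y**2
            x = x + step * grad + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    return x
```

For a standard Gaussian prior the smoothed score is available in closed form, $-x/(1+\sigma^2)$, which makes the sketch checkable end to end; in the paper's setting this role is played by the learned network.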
Advait Parulekar
Graduate Student
Machine Learning
Litu Rout
University of Texas at Austin
Machine Learning, Generative Modeling, Sampling, Optimization
Karthikeyan Shanmugam
Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin
Sanjay Shakkottai
Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin