🤖 AI Summary
This work addresses the challenge of conditional sampling in generative diffusion models for Bayesian inverse problems. It systematically surveys and unifies the two dominant paradigms: end-to-end methods built on the joint distribution, and decoupled approaches that combine a pre-trained marginal distribution with an explicit likelihood model. The authors propose, for the first time, a theoretically consistent unified framework that integrates Monte Carlo sampling, diffusion process reweighting, conditional probability construction, and fine-tuning techniques, rigorously characterizing the underlying assumptions and the relationships among these methods. The framework bridges theoretical gaps between disparate conditional generation strategies and delivers a scalable, interpretable, and theoretically grounded toolkit for conditional sampling in inverse problems from scientific computing, including image reconstruction and physics-based simulation.
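
To make the decoupled paradigm concrete: it typically rests on the standard Bayes decomposition of the conditional score along the noising process (notation here is illustrative, not taken from the paper),

$$
\nabla_{x_t} \log p_t(x_t \mid y) \;=\; \nabla_{x_t} \log p_t(x_t) \;+\; \nabla_{x_t} \log p_t(y \mid x_t),
$$

where the first term is supplied by the pre-trained diffusion model and the second, generally intractable, term must be approximated using the explicit likelihood. The surveyed methods differ chiefly in how they handle this second term.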
📝 Abstract
Generative diffusions are a powerful class of Monte Carlo samplers that leverage bridging Markov processes to approximate complex, high-dimensional distributions, such as those found in image processing and language models. Despite their success in these domains, an important open challenge remains: extending these techniques to sample from conditional distributions, as required in, for example, Bayesian inverse problems. In this paper, we present a comprehensive review of existing computational approaches to conditional sampling within generative diffusion models. Specifically, we highlight key methodologies that either utilise the joint distribution or rely on (pre-trained) marginal distributions with explicit likelihoods to construct conditional generative samplers.
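
As a rough illustration of the second family (a pre-trained marginal score combined with an explicit likelihood), below is a minimal Python sketch of one guided reverse-diffusion step for a linear-Gaussian observation model $y = Ax + \varepsilon$. The names (`score_model`, `guided_step`) and the Euler-Maruyama discretisation are assumptions for exposition, not the paper's algorithm:

```python
import numpy as np

def guided_step(x_t, t, dt, score_model, y, A, sigma_y, beta=1.0):
    """One guided reverse-diffusion (Euler-Maruyama) step; illustrative sketch.

    Combines a pre-trained unconditional score with the gradient of an
    explicit Gaussian log-likelihood, following the decomposition
    grad log p_t(x | y) = grad log p_t(x) + grad log p_t(y | x).
    """
    prior_score = score_model(x_t, t)  # pre-trained marginal (unconditional) score
    # Gaussian likelihood gradient A^T (y - A x) / sigma_y^2 evaluated at x_t;
    # a crude surrogate: the surveyed methods differ in how they approximate
    # grad log p_t(y | x_t) when x_t is noisy.
    likelihood_score = A.T @ (y - A @ x_t) / sigma_y**2
    drift = 0.5 * beta * x_t + beta * (prior_score + likelihood_score)
    return x_t + drift * dt + np.sqrt(beta * dt) * np.random.randn(*x_t.shape)
```

A real sampler would iterate this step from $t = T$ down to $0$ with a trained network as `score_model`; the point here is only to show where the pre-trained prior and the explicit likelihood each enter.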