🤖 AI Summary
This work addresses Bayesian inverse problems governed by partial differential equations in infinite-dimensional function spaces. It proposes Supervised Guidance Training, a method that enables efficient posterior sampling from pretrained diffusion models without requiring additional simulations. Building on an infinite-dimensional extension of Doob's $h$-transform, the approach decomposes the conditional score into an unconditional score and a guidance term; since the guidance term is intractable, it is learned with a simulation-free score-matching objective, yielding the first posterior fine-tuning of diffusion models directly in function space. The method circumvents the infeasibility of conventional guidance approaches in infinite-dimensional settings and produces accurate, stable posterior samples across several function-space Bayesian inverse problems.
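In symbols, a finite-dimensional analogue of the score decomposition (written here in the standard Doob $h$-transform notation as an illustrative assumption, not the paper's function-space formulation) reads:

```latex
% Conditional score = unconditional score + guidance term.
\nabla_x \log p_t(x \mid y)
  = \nabla_x \log p_t(x) + \nabla_x \log h_t(x, y),
\qquad
h_t(x, y) = \mathbb{E}\!\left[\, p(y \mid X_0) \,\middle|\, X_t = x \,\right].
```

The second term is the guidance term; because the conditional expectation defining $h_t$ is intractable, it is what the proposed simulation-free objective learns.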
📝 Abstract
Score-based diffusion models have recently been extended to infinite-dimensional function spaces, with applications such as inverse problems arising from partial differential equations. In the Bayesian formulation of inverse problems, the aim is to sample from a posterior distribution over functions obtained by conditioning a prior on noisy observations. While diffusion models provide expressive priors in function space, the theory of conditioning them to sample from the posterior remains open. We address this, assuming that the prior either lies in the Cameron-Martin space or is absolutely continuous with respect to a Gaussian measure. We prove that the models can be conditioned using an infinite-dimensional extension of Doob's $h$-transform, and that the conditional score decomposes into an unconditional score and a guidance term. As the guidance term is intractable, we propose a simulation-free score matching objective (called Supervised Guidance Training) enabling efficient and stable posterior sampling. We illustrate the theory with numerical examples on Bayesian inverse problems in function spaces. In summary, our work offers the first function-space method for fine-tuning trained diffusion models to accurately sample from a posterior.
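To make the score decomposition concrete, here is a minimal sketch of a guided reverse-diffusion step after discretizing the function on a grid. All names and the closed-form Gaussian stand-ins for the pretrained score and the learned guidance network are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch: the "networks" below are closed-form Gaussian
# stand-ins (illustrative assumptions), not the paper's trained models.

def unconditional_score(x, t):
    """Stand-in for a pretrained unconditional score model s_theta(x, t).

    Here: the exact score of a standard Gaussian reference measure, -x.
    """
    return -x

def guidance_term(x, t, y, obs_op, noise_var):
    """Stand-in for the learned guidance term (the Doob h-transform
    gradient, which Supervised Guidance Training would approximate).

    Here: the likelihood score A^T (y - A x) / sigma^2 of an assumed
    linear observation model y = A x + noise.
    """
    return obs_op.T @ (y - obs_op @ x) / noise_var

def guided_reverse_step(x, t, dt, y, obs_op, noise_var, rng):
    """One Euler-Maruyama step of the reverse VP-SDE (beta = 1), with
    conditional score = unconditional score + guidance term."""
    score = unconditional_score(x, t) + guidance_term(x, t, y, obs_op, noise_var)
    drift = 0.5 * x + score  # reverse-time drift for dX = -0.5 X dt + dW
    return x + dt * drift + np.sqrt(dt) * rng.standard_normal(x.shape)
```

Iterating this step from $t = 1$ down to $t \approx 0$, starting from Gaussian noise, produces an approximate posterior sample; in the paper's setting the two stand-in functions are replaced by the pretrained function-space score model and the guidance network trained with the simulation-free objective.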