🤖 AI Summary
Problem: Posterior sampling in large-scale Bayesian inverse problems with score-based diffusion models is computationally prohibitive because existing methods require many forward model evaluations to generate each sample.
Method: We propose Scalable Diffusion Posterior Sampling (SDPS), which learns a task-specific score function offline, eliminating all forward model calls during sampling, and applies an analytically derived affine transformation to recover the conditional posterior score from the trained one. The framework extends rigorously to infinite-dimensional function spaces via infinite-dimensional stochastic differential equations (SDEs).
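The paper's specific affine construction is not reproduced in this summary; as a hedged illustration of why affine maps keep scores exactly tractable, the following standard change-of-variables identity is the kind of fact such constructions rely on (a general identity, not the authors' derivation): if $x = Mz + m$ with $M$ invertible, then

$$
\nabla_x \log p_X(x) \;=\; M^{-\top}\,\nabla_z \log p_Z\big(M^{-1}(x - m)\big),
$$

so evaluating a trained score at an affinely shifted point and multiplying by a fixed matrix yields the transformed score exactly, with no additional approximation.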
Contributions/Results: SDPS performs exact posterior sampling without any conditional score approximation (a first in the literature) and comes with provable convergence guarantees in infinite dimensions. In high-dimensional computed tomography (CT) reconstruction experiments, SDPS generates each sample with zero forward model evaluations, drastically reducing computational cost while remaining both theoretically grounded and empirically accurate.
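As a minimal sketch of what "zero forward model evaluations per sample" means in practice, the reverse-SDE sampling loop below touches only the learned score, never a forward operator. The names, the variance-exploding schedule, and the toy closed-form `score_fn` (standing in for the trained network) are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: Euler-Maruyama on a variance-exploding (VE) reverse SDE.
# `score_fn` stands in for the offline-trained, task-dependent score;
# note the sampling loop never calls the forward map.
import numpy as np

rng = np.random.default_rng(0)

def score_fn(x, t):
    # Exact score of N(0, (1 + t) I), the marginal of a VE diffusion with
    # g(t)^2 = 1 started from N(0, I); a runnable stand-in for s_theta.
    return -x / (1.0 + t)

def sample_reverse_sde(dim=16, n_steps=500, t_max=1.0):
    dt = t_max / n_steps
    x = rng.normal(scale=np.sqrt(1.0 + t_max), size=dim)  # draw from p_{t_max}
    for i in range(n_steps):
        t = t_max - i * dt
        g2 = 1.0  # toy diffusion coefficient g(t)^2
        # Reverse-time VE update: x <- x + g^2 * score * dt + g * sqrt(dt) * z
        x = x + g2 * score_fn(x, t) * dt + np.sqrt(g2 * dt) * rng.normal(size=dim)
    return x

print(np.round(sample_reverse_sde()[:4], 3))  # approximate draws from N(0, I)
```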
📝 Abstract
Score-based diffusion models (SDMs) have emerged as a powerful tool for sampling from the posterior distribution in Bayesian inverse problems. However, existing methods often require multiple evaluations of the forward mapping to generate a single sample, resulting in significant computational costs for large-scale inverse problems. To address this issue, we propose a scalable diffusion posterior sampling (SDPS) method that bypasses forward mapping evaluations during sampling by shifting the computational effort to an offline training phase, where a task-dependent score is learned based on the forward mapping. Crucially, the conditional posterior score is then derived from this trained score using affine transformations, ensuring that no conditional score approximation is needed. The approach is shown to generalize to infinite-dimensional diffusion models and is validated through rigorous convergence analysis and high-dimensional CT imaging experiments.
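To make the offline training phase concrete, here is a minimal denoising score matching sketch in PyTorch. Everything here (the `ScoreNet` architecture, the VE noise schedule, and the placeholder training data) is an assumption for illustration; the paper defines its own task-dependent score and the distribution, built from the forward mapping, on which it is trained.

```python
# Hedged sketch of offline denoising score matching. The forward mapping
# would enter only through the training distribution, so later sampling
# needs only the trained network. Names are illustrative, not the paper's.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 128), nn.SiLU(),
            nn.Linear(128, 128), nn.SiLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x, t):
        # Condition on time by concatenating t as an extra input feature.
        return self.net(torch.cat([x, t[:, None]], dim=-1))

def dsm_loss(model, x0, sigma_min=0.01, sigma_max=1.0):
    """Denoising score matching: fit s_theta(x_t, t) to grad log p_t(x_t | x0)."""
    t = torch.rand(x0.shape[0])
    sigma = sigma_min * (sigma_max / sigma_min) ** t   # geometric VE schedule
    eps = torch.randn_like(x0)
    xt = x0 + sigma[:, None] * eps
    target = -eps / sigma[:, None]                     # exact conditional score
    return ((model(xt, t) - target) ** 2 * sigma[:, None] ** 2).mean()

dim = 8
model = ScoreNet(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# Placeholder data: in SDPS the training distribution would be task-dependent,
# i.e., constructed from the forward mapping (an assumption, not the paper's
# exact construction).
x0 = torch.randn(256, dim)
for _ in range(10):  # a few steps just to show the offline loop
    opt.zero_grad()
    loss = dsm_loss(model, x0)
    loss.backward()
    opt.step()
```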