🤖 AI Summary
Diffusion-based generative models incur prohibitive computational cost when employed for Monte Carlo posterior sampling in Bayesian inverse problems such as computational imaging. To address this, we introduce, for the first time, the multilevel Monte Carlo (MLMC) framework into diffusion-based Bayesian computation. Our approach constructs a hierarchy of diffusion models with jointly optimized accuracy-cost trade-offs, enabling variance reduction across levels while preserving posterior sampling fidelity. Crucially, it significantly reduces the number of neural network evaluations required per sample without sacrificing statistical accuracy. Evaluated on three canonical imaging benchmarks (deblurring, super-resolution, and compressed sensing), our method achieves a 4×-to-8× reduction in computational cost relative to standard diffusion-based samplers. This establishes a scalable stochastic sampling paradigm for large-scale uncertainty quantification in inverse problems, bridging high-fidelity posterior inference with practical computational efficiency.
📝 Abstract
Generative diffusion models have recently emerged as a powerful strategy to perform stochastic sampling in Bayesian inverse problems, delivering remarkably accurate solutions for a wide range of challenging applications. However, diffusion models often require a large number of neural function evaluations per sample in order to deliver accurate posterior samples. As a result, using diffusion models as stochastic samplers for Monte Carlo integration in Bayesian computation can be highly computationally expensive, particularly in applications that require a substantial number of Monte Carlo samples for conducting uncertainty quantification analyses. This cost is especially high in large-scale inverse problems such as computational imaging, which rely on large neural networks that are expensive to evaluate. With quantitative imaging applications in mind, this paper presents a Multilevel Monte Carlo strategy that significantly reduces the cost of Bayesian computation with diffusion models. This is achieved by exploiting cost-accuracy trade-offs inherent to diffusion models to carefully couple models of different levels of accuracy, lowering the overall cost of the calculation without reducing the final accuracy. The proposed approach achieves a $4\times$-to-$8\times$ reduction in computational cost with respect to standard techniques across three benchmark imaging problems.
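To illustrate the coupling idea the abstract describes, here is a minimal, self-contained sketch of a multilevel Monte Carlo telescoping estimator. It is a toy analogy, not the paper's method: `level_estimate` is a hypothetical stand-in for a sampler whose accuracy (and cost) grows with `level`, in place of actual diffusion models, and the target quantity is a simple integral rather than a posterior expectation. The key mechanism is the same: each correction term reuses the *same* random input at the fine and coarse levels, so the corrections have small variance and few expensive fine-level samples are needed.

```python
import numpy as np

def f(x):
    return np.exp(x)

def level_estimate(level, u):
    """Hypothetical level-`level` sampler: randomized quadrature of f on
    [0, 1] with 2**level nodes, shifted by the random input u in [0, 1).
    Finer levels are more accurate but cost more evaluations of f."""
    n = 2 ** level
    i = np.arange(n)
    return f((i + u) / n).mean()

def mlmc_estimate(max_level, samples_per_level, rng):
    """Telescoping MLMC estimator:
        E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
    Each correction draws the SAME random inputs u for both P_l and
    P_{l-1}; this coupling is what drives the variance reduction."""
    total = 0.0
    for level in range(max_level + 1):
        u = rng.random(samples_per_level[level])
        fine = np.array([level_estimate(level, ui) for ui in u])
        if level == 0:
            total += fine.mean()
        else:
            coarse = np.array([level_estimate(level - 1, ui) for ui in u])
            total += (fine - coarse).mean()
    return total

rng = np.random.default_rng(0)
# Many cheap coarse samples, few expensive fine ones.
est = mlmc_estimate(4, [4000, 1000, 250, 60, 15], rng)
```

Here `est` approximates the integral of `exp` on [0, 1] (about 1.718) while spending most samples at the cheap coarse levels; the same budget-allocation logic is what yields the cost savings in the diffusion setting, with neural function evaluations playing the role of quadrature nodes.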