🤖 AI Summary
This work addresses the high computational cost of existing diffusion-based solvers for Bayesian inverse problems, which typically rely on vector-Jacobian products through the denoiser at every inference step. To overcome this limitation, the authors propose a lightweight likelihood surrogate that guides pre-trained diffusion models toward conditional samples in a zero-shot fashion, without backpropagation or gradient computation through the denoising network. This yields, for the first time, gradient-free zero-shot diffusion guidance, substantially reducing computational overhead while maintaining competitive accuracy. Across multiple inverse-problem benchmarks, the method attains a Pareto-optimal trade-off between speed and reconstruction quality, making it the fastest approach among methods of comparable accuracy.
📝 Abstract
Pretrained diffusion models serve as effective priors for Bayesian inverse problems: they enable zero-shot conditional sampling without task-specific retraining. However, a major limitation of existing methods is their reliance on surrogate likelihoods that require a vector-Jacobian product through the denoiser at each denoising step, creating a substantial computational burden. To address this, we introduce a lightweight likelihood surrogate that eliminates gradient computation through the denoiser network, allowing diverse inverse problems to be handled without backpropagation overhead. Experiments confirm that our method reduces inference cost dramatically while achieving the best results on multiple tasks. In short, we propose the fastest Pareto-optimal method for Bayesian inverse problems.
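The core idea, likelihood-guided sampling without differentiating through the denoiser, can be sketched on a toy linear inverse problem. Everything below is a hypothetical illustration: `denoiser` is a placeholder shrinkage function rather than a trained network, the schedule is crude, and the analytic data-consistency gradient merely stands in for the paper's surrogate likelihood, whose exact form is not reproduced here.

```python
import numpy as np

# Toy linear inverse problem: recover x from noisy measurements y = A @ x + noise.
rng = np.random.default_rng(0)
d, m = 8, 4                                # signal and measurement dimensions
A = rng.standard_normal((m, d))            # known linear forward operator
x_true = rng.standard_normal(d)
sigma_y = 0.1
y = A @ x_true + sigma_y * rng.standard_normal(m)

def denoiser(x_t, t):
    """Stand-in for a pretrained diffusion denoiser estimating E[x0 | x_t]."""
    return x_t / (1.0 + t)                 # toy shrinkage, not a real network

def gradient_free_guidance(x_t, t, step=0.02):
    """One guided update that never differentiates through the denoiser.

    Gradient-based guidance would need the vector-Jacobian product
    d x0_hat / d x_t; here the likelihood gradient is instead evaluated
    analytically at x0_hat, treated as a constant, so only a forward
    pass through the denoiser is required.
    """
    x0_hat = denoiser(x_t, t)              # forward pass only
    grad = A.T @ (y - A @ x0_hat)          # analytic data-consistency gradient
    return x_t + step * grad

x_init = rng.standard_normal(d)
x_t = x_init.copy()
for t in np.linspace(1.0, 0.0, 200):       # crude "reverse diffusion" schedule
    x_t = gradient_free_guidance(x_t, t)

print("initial residual:", np.linalg.norm(A @ x_init - y))
print("final residual:  ", np.linalg.norm(A @ x_t - y))
```

The contrast with gradient-based guidance is that the latter would backpropagate the likelihood gradient through `denoiser` via a vector-Jacobian product, incurring a full backward pass through the network at every denoising step; the gradient-free variant above touches the denoiser only in the forward direction.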