🤖 AI Summary
In Bayesian computational imaging, diffusion models (DMs) suffer from low posterior sampling efficiency and poor adaptability to diverse forward models. To address this, we propose a conditional-sampler construction framework that integrates deep unrolling with model distillation. Our key contribution is the first application of deep unrolling to the LATINO Langevin MCMC algorithm, transforming a pre-trained DM prior into a lightweight, few-step (≤10 steps) conditional sampling network. Crucially, end-to-end distillation yields forward-model-agnostic generalization, eliminating the need to retrain when deploying on new forward operators. Experiments on CT and MRI reconstruction show that our method significantly outperforms existing diffusion-guided samplers: it achieves comparable or superior reconstruction fidelity while accelerating inference by 5–20×. The resulting sampler is both computationally efficient and broadly applicable across imaging modalities without task-specific fine-tuning.
📝 Abstract
Diffusion models (DMs) have emerged as powerful image priors in Bayesian computational imaging. Two primary strategies have been proposed for leveraging DMs in this context: Plug-and-Play methods, which are zero-shot and highly flexible but rely on approximations; and specialized conditional DMs, which achieve higher accuracy and faster inference for specific tasks through supervised training. In this work, we introduce a novel framework that integrates deep unfolding and model distillation to transform a DM image prior into a few-step conditional model for posterior sampling. A central innovation of our approach is the unfolding of a Markov chain Monte Carlo (MCMC) algorithm, specifically the recently proposed LATINO Langevin sampler (Spagnoletti et al., 2025), representing the first known instance of deep unfolding applied to a Monte Carlo sampling scheme. We demonstrate our proposed unfolded and distilled samplers through extensive experiments and comparisons with the state of the art, where they achieve excellent accuracy and computational efficiency, while retaining the flexibility to adapt to variations in the forward model at inference time.
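To make the unfolding idea concrete, here is a minimal, hypothetical sketch of the kind of Monte Carlo scheme being unrolled: a plain unadjusted Langevin iteration for posterior sampling, truncated to a fixed, small number of steps (the quantity that deep unfolding would turn into learnable layers and distillation would then compress). The function name, the linear-Gaussian likelihood, and the stand-in `prior_score` are illustrative assumptions, not the LATINO algorithm itself; in the paper's setting the prior score would come from a pre-trained diffusion model.

```python
import numpy as np

def unrolled_langevin(y, A, sigma, prior_score, n_steps=10, step=1e-2, rng=None):
    """Unadjusted Langevin sampler for p(x | y) under y = A x + Gaussian noise,
    run for a fixed, small number of steps (the unrolled 'depth')."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = A.T @ y  # simple initialization from the measurements
    for _ in range(n_steps):
        # score of the Gaussian likelihood: grad_x log p(y | x)
        lik_score = A.T @ (y - A @ x) / sigma**2
        # posterior score = likelihood score + prior score
        # (prior_score stands in for a pre-trained diffusion-model denoiser)
        score = lik_score + prior_score(x)
        # one Langevin step: gradient ascent on log-posterior plus injected noise
        x = x + step * score + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x
```

In a deep-unfolded version, each of the `n_steps` iterations would become a network layer with learnable step sizes (and a learned denoiser in place of `prior_score`), trained end to end so that around ten such steps suffice.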