🤖 AI Summary
This work addresses inverse problems in Bayesian imaging, where efficiently sampling posterior distributions governed by complex priors, particularly product-of-experts (PoE) models, remains challenging. The authors show that such models can be lifted into an analytically tractable latent-variable structure, which they call a Gaussian latent machine. This lifting enables a generic two-block Gibbs sampling scheme in the general case and direct posterior sampling under specific conditions. By unifying latent-variable modeling, probabilistic graphical structure, and Bayesian inversion, the framework improves MCMC convergence speed and sampling accuracy. Across diverse imaging tasks, including denoising, deblurring, and tomographic reconstruction, it mixes and converges faster than state-of-the-art methods such as stochastic gradient Langevin dynamics and proximal MCMC. The result is a principled, scalable approach to Bayesian inference that balances theoretical rigor with computational efficiency for complex hierarchical priors.
📝 Abstract
We consider the problem of sampling from a product-of-experts-type model that encompasses many standard prior and posterior distributions commonly found in Bayesian imaging. We show that this model can be easily lifted into a novel latent variable model, which we refer to as a Gaussian latent machine. This leads to a general sampling approach that unifies and generalizes many existing sampling algorithms in the literature. Most notably, it yields a highly efficient two-block Gibbs sampling approach in the general case, while specializing to direct sampling algorithms in particular cases. Finally, we present detailed numerical experiments that demonstrate the efficiency and effectiveness of the proposed approach across a wide range of prior and posterior sampling problems from Bayesian imaging.
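To give intuition for the two-block Gibbs scheme mentioned above: such a sampler alternates exact draws from the two conditional distributions of a joint model. The sketch below is a minimal toy illustration, not the paper's algorithm; it targets a bivariate Gaussian with correlation `rho` (a stand-in for the image variable and the auxiliary latent variable), whose conditionals are themselves Gaussian and can be sampled in closed form.

```python
import numpy as np

def two_block_gibbs(rho, n_iter=20000, seed=0):
    """Toy two-block Gibbs sampler for a standard bivariate Gaussian
    with correlation rho, alternating draws from p(x | z) and p(z | x)."""
    rng = np.random.default_rng(seed)
    x, z = 0.0, 0.0
    samples = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho**2)  # conditional standard deviation
    for t in range(n_iter):
        x = rho * z + sd * rng.standard_normal()  # draw x | z exactly
        z = rho * x + sd * rng.standard_normal()  # draw z | x exactly
        samples[t] = (x, z)
    return samples

samples = two_block_gibbs(rho=0.8)
burned = samples[2000:]  # discard burn-in
print(np.corrcoef(burned[:, 0], burned[:, 1])[0, 1])  # close to 0.8
```

In the Gaussian-latent-machine setting, the appeal of this structure is that both conditionals remain tractable even when the original product-of-experts density is not, so each Gibbs step is an exact draw rather than an approximate gradient-based move.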