AI Summary
In computational imaging, image restoration faces several bottlenecks: heavy reliance on training data, high model complexity, and inefficient Bayesian posterior computation. To address these, this paper proposes the Variational Bayes Latent Estimation (VBLE) framework, built upon a compressive autoencoder. Its core contribution is to use a lightweight compressive autoencoder, which can be viewed as a variational autoencoder (VAE) with a flexible latent prior, as the generative regularizer in Bayesian inverse problem solving, coupled with a simple, analytically tractable variational posterior parameterization for efficient uncertainty quantification. Evaluated on the BSD and FFHQ datasets, VBLE achieves restoration accuracy comparable to state-of-the-art plug-and-play (PnP) methods, while accelerating posterior sampling by one to two orders of magnitude and enabling fast, pixel-wise uncertainty estimation.
Abstract
Regularization of inverse problems is of paramount importance in computational imaging. The ability of neural networks to learn efficient image representations has been recently exploited to design powerful data-driven regularizers. While state-of-the-art plug-and-play (PnP) methods rely on an implicit regularization provided by neural denoisers, alternative Bayesian approaches consider Maximum A Posteriori (MAP) estimation in the latent space of a generative model, thus with an explicit regularization. However, state-of-the-art deep generative models require a huge amount of training data compared to denoisers. Besides, their complexity hampers the optimization involved in latent MAP derivation. In this work, we first propose to use compressive autoencoders instead. These networks, which can be seen as variational autoencoders with a flexible latent prior, are smaller and easier to train than state-of-the-art generative models. As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm, which performs latent estimation within the framework of variational inference. Thanks to a simple yet efficient parameterization of the variational posterior, VBLE allows for fast and easy (approximate) posterior sampling. Experimental results on the BSD and FFHQ image datasets demonstrate that VBLE reaches performance similar to state-of-the-art PnP methods, while being able to quantify uncertainties significantly faster than other existing posterior sampling techniques. The code associated with this paper is available at https://github.com/MaudBqrd/VBLE
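To make the idea of variational latent estimation concrete, the following is a minimal, self-contained sketch under toy assumptions: the trained decoder of the compressive autoencoder is replaced by a fixed linear map `D`, the degradation operator `A` masks half the pixels, and the variational posterior is a diagonal Gaussian q(z) = N(mu, diag(v)). None of these stand-ins come from the paper's actual networks; they only illustrate the mechanics of fitting the variational posterior in latent space and then drawing (approximate) posterior samples for pixel-wise uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's trained networks): a linear
# "decoder" D and a masking operator A that observes every other pixel.
d_latent, d_image = 4, 16
D = rng.standard_normal((d_image, d_latent))
A = np.eye(d_image)[::2]
M = A @ D                                   # forward model composed with decoder
z_true = rng.standard_normal(d_latent)
sigma = 0.05                                # observation noise std
y = M @ z_true + sigma * rng.standard_normal(M.shape[0])

# Negative ELBO for q(z) = N(mu, diag(v)) with latent prior N(0, I) and a
# Gaussian likelihood is, up to constants,
#   ||M mu - y||^2 / (2 sigma^2) + sum_j v_j ||M_j||^2 / (2 sigma^2)
#   + 0.5 * sum_j (v_j + mu_j^2 - 1 - log v_j).
# For a linear decoder the optimal variances decouple and are closed-form:
col2 = (M ** 2).sum(axis=0)                 # squared column norms ||M_j||^2
v = 1.0 / (1.0 + col2 / sigma ** 2)

# Gradient descent on mu; the step size comes from the Lipschitz constant of
# the quadratic objective (data-term Hessian plus the identity prior term).
H = M.T @ M / sigma ** 2 + np.eye(d_latent)
lr = 1.0 / np.linalg.eigvalsh(H).max()
mu = np.zeros(d_latent)
for _ in range(2000):
    grad = M.T @ (M @ mu - y) / sigma ** 2 + mu
    mu -= lr * grad

# Approximate posterior sampling: draw latents from q(z), decode them, and
# use the per-pixel standard deviation as a cheap uncertainty map.
zs = mu + np.sqrt(v) * rng.standard_normal((200, d_latent))
xs = zs @ D.T
x_mean, x_std = xs.mean(axis=0), xs.std(axis=0)
```

With a deep, nonlinear decoder these closed forms disappear; the paper's algorithm instead optimizes the variational parameters with stochastic gradients (reparameterization), but the structure, a data-fit term plus a KL to the latent prior, followed by decoding latent samples, is the same.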