Variational Bayes Image Restoration With Compressive Autoencoders

πŸ“… 2023-11-29
πŸ›οΈ IEEE Transactions on Image Processing
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In computational imaging, image restoration faces key bottlenecks: heavy reliance on training data, high model complexity, and costly Bayesian inference. To address these, this paper proposes the Variational Bayes Latent Estimation (VBLE) framework, built upon a compressive autoencoder. Its core idea is to use a lightweight compressive autoencoder, which can be viewed as a variational autoencoder (VAE) with a flexible latent prior, as the generative model in Bayesian inverse problem solving, coupled with an analytically tractable parameterization of the variational posterior for efficient uncertainty quantification. Evaluated on the BSD and FFHQ datasets, VBLE achieves restoration accuracy comparable to state-of-the-art plug-and-play (PnP) methods, while accelerating posterior sampling by one to two orders of magnitude and enabling fast, pixel-wise uncertainty estimation.
πŸ“ Abstract
Regularization of inverse problems is of paramount importance in computational imaging. The ability of neural networks to learn efficient image representations has been recently exploited to design powerful data-driven regularizers. While state-of-the-art plug-and-play (PnP) methods rely on an implicit regularization provided by neural denoisers, alternative Bayesian approaches consider Maximum A Posteriori (MAP) estimation in the latent space of a generative model, thus with an explicit regularization. However, state-of-the-art deep generative models require a huge amount of training data compared to denoisers. Besides, their complexity hampers the optimization involved in latent MAP derivation. In this work, we first propose to use compressive autoencoders instead. These networks, which can be seen as variational autoencoders with a flexible latent prior, are smaller and easier to train than state-of-the-art generative models. As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm, which performs latent estimation within the framework of variational inference. Thanks to a simple yet efficient parameterization of the variational posterior, VBLE allows for fast and easy (approximate) posterior sampling. Experimental results on the image datasets BSD and FFHQ demonstrate that VBLE reaches similar performance to state-of-the-art PnP methods, while being able to quantify uncertainties significantly faster than other existing posterior sampling techniques. The code associated with this paper is available at https://github.com/MaudBqrd/VBLE
Problem

Research questions and friction points this paper is trying to address.

Improving image restoration with efficient compressive autoencoders
Addressing high training data needs of deep generative models
Enabling faster uncertainty quantification in inverse problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compressive autoencoders for flexible latent prior
Variational Bayes Latent Estimation algorithm
Fast approximate posterior sampling for uncertainties
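The variational latent estimation idea summarized above can be illustrated on a toy problem: fit a Gaussian variational posterior q(z) = N(mu, diag(s^2)) in the latent space of a decoder, by minimizing an ELBO-style objective (data fit plus KL to a standard-normal prior), then sample latents and decode to get a restoration with pixel-wise uncertainty. This is a minimal sketch under strong simplifying assumptions (a linear "decoder", a linear degradation operator, closed-form gradients, unit noise variance); it is not the authors' VBLE implementation, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper):
# "decoder" D(z) = W @ z stands in for the compressive autoencoder's decoder,
# A is a degradation operator, and y = A D(z*) + noise is the observation.
d_x, d_z, d_y = 32, 8, 16
W = rng.normal(size=(d_x, d_z)) / np.sqrt(d_z)   # linear stand-in decoder
A = rng.normal(size=(d_y, d_x)) / np.sqrt(d_x)   # degradation operator
z_true = rng.normal(size=d_z)
y = A @ W @ z_true + 0.01 * rng.normal(size=d_y)

M = A @ W
col_sq = (M ** 2).sum(axis=0)  # ||M[:, j]||^2, appears in the expected data term

# Variational posterior q(z) = N(mu, diag(s^2)); minimize the negative ELBO:
#   E_q ||y - M z||^2 / 2 + KL(q || N(0, I))   (up to constants)
mu = np.zeros(d_z)
log_s = np.zeros(d_z)

def neg_elbo(mu, log_s):
    s2 = np.exp(2 * log_s)
    data = 0.5 * (np.sum((y - M @ mu) ** 2) + np.sum(col_sq * s2))
    kl = 0.5 * np.sum(mu ** 2 + s2 - 1.0 - 2 * log_s)
    return data + kl

lr = 0.05
for _ in range(500):
    s2 = np.exp(2 * log_s)
    # Closed-form gradients of the negative ELBO (exact for a linear decoder)
    g_mu = -M.T @ (y - M @ mu) + mu
    g_log_s = col_sq * s2 + (s2 - 1.0)
    mu -= lr * g_mu
    log_s -= lr * g_log_s

# Approximate posterior sampling: draw latents, decode, read off statistics
zs = mu + np.exp(log_s) * rng.normal(size=(100, d_z))
xs = zs @ W.T
x_mean, x_std = xs.mean(axis=0), xs.std(axis=0)  # restoration + uncertainty map
```

With a neural decoder the expectation in the data term is no longer closed-form; the paper's approach instead uses a reparameterization-style estimate, which is what makes sampling from the fitted posterior essentially free compared with MCMC-based uncertainty quantification.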
πŸ”Ž Similar Papers
No similar papers found.
Maud Biquard
ISAE-Supaero / CNES, 31400 Toulouse, France
Florence Genin
CNES, 31400 Toulouse, France
Christophe Latry
CNES, 31400 Toulouse, France
Marie Chabert
Professor, University of Toulouse, IRIT/INP-ENSEEIHT
Signal processing, image processing
Thomas Oberlin
ISAE-SUPAERO, Université de Toulouse
Signal and image processing, machine learning