🤖 AI Summary
This work investigates the posterior error induced by generative priors in Bayesian inverse problems. Under suitable assumptions, it establishes quantitative error bounds for minimum Wasserstein-2 generative models and shows that the posterior distribution inherits the convergence rate of the generative prior with respect to the Wasserstein-1 distance. The analysis combines tools from optimal transport, approximation theory for generative models, and the framework of Bayesian inverse problems. Numerical experiments on benchmark inverse problems, including one governed by an elliptic PDE, confirm that the observed posterior error behavior aligns with the theoretical predictions, highlighting the role of generative prior accuracy in determining the quality of posterior inference.
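The main result can be stated schematically. Writing $\mu$ for the true prior, $\mu_n$ for a generative prior trained on $n$ samples, and $\mu^{y}$, $\mu_n^{y}$ for the corresponding posteriors given data $y$, a bound of the following form captures the claim; the notation and the constant $C(y)$ here are illustrative, and the precise assumptions are those of the paper:

$$
W_1\bigl(\mu_n^{y},\, \mu^{y}\bigr) \;\le\; C(y)\, W_1\bigl(\mu_n,\, \mu\bigr).
$$

In particular, if the generative prior satisfies $W_1(\mu_n, \mu) = O(n^{-r})$, then the posterior error inherits the same $O(n^{-r})$ rate.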
📝 Abstract
Data-driven methods for the solution of inverse problems have become widely popular in recent years thanks to the rise of machine learning techniques. A popular approach is to train a generative model on additional data to learn a bespoke prior for the problem at hand. In this article we analyze such problems by deriving quantitative error bounds for minimum Wasserstein-2 generative models used as priors. We show that, under some assumptions, the error in the posterior due to the generative prior inherits the same rate as the prior with respect to the Wasserstein-1 distance. We further present numerical experiments verifying that aspects of our error analysis manifest in some benchmarks, followed by an elliptic PDE inverse problem in which a generative prior is used to model a non-stationary field.
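The workflow the abstract describes, using a trained generative model as the prior and performing Bayesian inference in its latent space, can be sketched as follows. This is a minimal toy illustration, not the paper's setup: the generator `G` is a fixed random nonlinear map standing in for a trained network, the forward operator is linear with Gaussian noise, and the sampler is plain random-walk Metropolis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" generator G: latent z in R^k -> parameter x in R^d.
# A fixed random nonlinear map stands in for an actual trained network.
k, d = 4, 20
W_gen = rng.standard_normal((d, k))

def G(z):
    return np.tanh(W_gen @ z)

# Linear forward operator A and noisy observation y = A x_true + noise.
m = 10
A = rng.standard_normal((m, d)) / np.sqrt(d)
x_true = G(rng.standard_normal(k))
sigma = 0.05
y = A @ x_true + sigma * rng.standard_normal(m)

def log_post(z):
    """Unnormalized log-posterior in latent space: Gaussian likelihood
    for y given G(z), standard normal prior on the latent variable z."""
    r = y - A @ G(z)
    return -0.5 * (r @ r) / sigma**2 - 0.5 * (z @ z)

# Random-walk Metropolis over z; posterior samples are pushed through G.
z, lp, step, samples = np.zeros(k), log_post(np.zeros(k)), 0.2, []
for it in range(20000):
    z_prop = z + step * rng.standard_normal(k)
    lp_prop = log_post(z_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        z, lp = z_prop, lp_prop
    if it >= 10000 and it % 10 == 0:  # thinned samples after burn-in
        samples.append(G(z))

x_post = np.mean(samples, axis=0)
print("relative error of posterior mean:",
      np.linalg.norm(x_post - x_true) / np.linalg.norm(x_true))
```

The point of the sketch is the structure: the posterior is defined over the generator's latent space, so any mismatch between the learned prior and the true prior propagates into the posterior, which is exactly the error the paper's bounds quantify.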