🤖 AI Summary
This work characterizes the minimal number of measurements (i.e., the sample complexity) required for stable and accurate recovery in Bayesian inverse problems, under arbitrary priors, forward operators, and noise models. We develop a unified analytical framework that, for the first time, jointly treats the approximate covering number of the prior and the concentration properties of the forward operator and noise as fundamental determinants of sample complexity. Our analysis reveals the intrinsic role of coherence in Bayesian recovery. We prove that, under deep generative priors, the sample complexity scales only log-linearly with the latent dimension. The theory yields universal, non-asymptotic upper bounds: for DNN-based priors, it establishes an $O(k \log n)$ measurement requirement, where $k$ is the latent dimension and $n$ the ambient signal dimension; moreover, it delivers the first rigorous Bayesian sample-size guarantees for critical imaging settings such as random orthogonal sampling.
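For concreteness, here is a schematic sketch of the setup and the shape of the bound (illustrative only: the standard Gaussian latent distribution is a common but assumed choice, and the constant $C$ and exact logarithmic factors depend on the DNN architecture and noise model rather than being reproduced from the paper):

$$ y = A x + e, \qquad x \sim \mathcal{P} = G_{\#}\,\mathcal{N}(0, I_k), \qquad m \;\geq\; C\, k \log n, $$

where $A$ is the forward operator, $e$ the measurement noise, $G : \mathbb{R}^k \to \mathbb{R}^n$ the generative DNN defining the prior as a pushforward, and $m$ the number of measurements.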
📝 Abstract
We study the sample complexity of Bayesian recovery for solving inverse problems with general prior, forward operator, and noise distributions. We consider posterior sampling according to an approximate prior $\mathcal{P}$, and establish sufficient conditions for stable and accurate recovery with high probability. Our main result is a non-asymptotic bound that shows that the sample complexity depends on (i) the intrinsic complexity of $\mathcal{P}$, quantified by its so-called approximate covering number, and (ii) concentration bounds for the forward operator and noise distributions. As a key application, we specialize to generative priors, where $\mathcal{P}$ is the pushforward of a latent distribution via a Deep Neural Network (DNN). We show that the sample complexity scales log-linearly with the latent dimension $k$, thus establishing the efficacy of DNN-based priors. Generalizing existing results on deterministic (i.e., non-Bayesian) recovery for the important problem of random sampling with an orthogonal matrix $U$, we show how the sample complexity is determined by the coherence of $U$ with respect to the support of $\mathcal{P}$. Hence, we establish that coherence plays a fundamental role in Bayesian recovery as well. Overall, our framework unifies and extends prior work, providing rigorous guarantees for the sample complexity of solving Bayesian inverse problems with arbitrary distributions.
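To make the objects in the abstract concrete, the following is a minimal NumPy sketch of the measurement model. All specifics are hypothetical stand-ins, not the paper's constructions: the two-layer ReLU network `G`, the QR-based orthogonal matrix `U`, the uniform row-subsampling scheme, and the empirical coherence proxy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 8, 64  # ambient dimension, latent dimension, measurements

# Hypothetical generative prior: a small random ReLU network G: R^k -> R^n,
# so the prior P is modeled as the pushforward of N(0, I_k) under G.
W1 = rng.normal(size=(128, k)) / np.sqrt(k)
W2 = rng.normal(size=(n, 128)) / np.sqrt(128)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

# Random orthogonal sampling: keep m uniformly drawn rows of an orthogonal
# matrix U (here from a QR factorization; in imaging, U is often a DFT/DCT).
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
rows = rng.choice(n, size=m, replace=False)
A = np.sqrt(n / m) * U[rows, :]  # rescaled subsampled forward operator

# One noisy measurement vector y = A x + e for a draw x from the prior.
x = G(rng.normal(size=k))
y = A @ x + 0.01 * rng.normal(size=m)

# Empirical proxy for the coherence of U w.r.t. the support of P:
# the largest alignment between rows of U and normalized prior samples.
X = np.stack([G(rng.normal(size=k)) for _ in range(200)])
X /= np.linalg.norm(X, axis=1, keepdims=True)
coherence_proxy = np.sqrt(n) * np.abs(U @ X.T).max()
print(f"m = {m} measurements, coherence proxy = {coherence_proxy:.2f}")
```

In this picture, a small coherence proxy means that samples from the prior spread their energy across many rows of $U$, which is the regime in which subsampled orthogonal measurements are informative; the paper's results make this intuition quantitative.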