🤖 AI Summary
This paper investigates optimized sampling for denoising in compressed sensing with subsampled unitary matrices. Addressing the fundamental question of whether the reconstruction error caused by Gaussian measurement noise vanishes as the number of measurements grows, we establish the first rigorous asymptotic denoising theory in this setting: when the prior set is contained in a union of low-dimensional subspaces, the reconstruction error decays at rate $1/m$ under optimized probability-weighted (with-replacement) sampling. This guarantee covers practical priors, including sparse vectors and the ranges of ReLU-based generative models. Experiments confirm that the theoretical $1/m$ decay rate closely matches empirical performance for both priors and that optimized sampling significantly outperforms uniform sampling. Our work thus provides the first asymptotically grounded theoretical foundation for optimized sampling in compressed sensing, extending the analysis to the weighted sampling-with-replacement framework.
📝 Abstract
Compressed sensing with subsampled unitary matrices benefits from *optimized* sampling schemes, which feature improved theoretical guarantees and empirical performance relative to uniform subsampling. We provide, in a first-of-its-kind result for compressed sensing, theoretical guarantees showing that the error caused by the measurement noise vanishes with an increasing number of measurements for optimized sampling schemes, assuming that the noise is Gaussian. We moreover provide similar guarantees for measurements sampled with replacement under arbitrary probability weights. All our results hold for prior sets contained in a union of low-dimensional subspaces. Finally, we demonstrate that this denoising behavior appears in empirical experiments, with a rate that closely matches our theoretical guarantees, both when the prior set is the range of a generative ReLU neural network and when it is the set of sparse vectors.
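The $1/m$ noise-decay behavior described above can be illustrated numerically. The sketch below is an assumption-laden toy model, not the paper's estimator: it uses a random orthogonal basis in place of a general unitary matrix, samples rows with replacement under probability weights proportional to the rows' energy on the signal's subspace (a hypothetical stand-in for the optimized weights), and recovers the signal by oracle weighted least squares on a single known low-dimensional coordinate subspace. The averaged squared error shrinks roughly like $1/m$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma = 256, 5, 0.1

# Random orthogonal sensing basis (toy stand-in for a unitary matrix)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Signal supported on a known k-dimensional coordinate subspace
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.standard_normal(k)

# With-replacement sampling weights proportional to each row's energy
# on the subspace -- an illustrative choice, not the paper's exact weights
energy = np.sum(U[:, support] ** 2, axis=1)
p = energy / energy.sum()

def mean_sq_error(m, trials=200):
    """Average squared recovery error from m weighted noisy measurements."""
    errs = []
    for _ in range(trials):
        idx = rng.choice(n, size=m, replace=True, p=p)
        y = U[idx] @ x + sigma * rng.standard_normal(m)  # noisy measurements
        w = 1.0 / np.sqrt(m * p[idx])                    # importance weights
        # Oracle weighted least squares restricted to the known subspace
        z, *_ = np.linalg.lstsq(w[:, None] * U[idx][:, support], w * y,
                                rcond=None)
        errs.append(np.sum((z - x[support]) ** 2))
    return float(np.mean(errs))

# Squared error shrinks roughly like 1/m as measurements increase
for m in (50, 200, 800):
    print(m, mean_sq_error(m))
```

Increasing `m` by a factor of 16 (from 50 to 800) reduces the averaged squared error by roughly the same factor in this toy setup, consistent with a $1/m$ rate.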