🤖 AI Summary
Conventional convex optimization methods (e.g., Basis Pursuit) for compressed sensing of vector-valued signals recover the signal suboptimally when the vector dimension $B$ is large. Method: This paper proposes SteinSense, a lightweight, parameter-free, training-free, and sparsity-agnostic iterative algorithm that integrates James–Stein shrinkage into the Approximate Message Passing (AMP) framework, yielding a theoretically grounded iterative thresholding mechanism. Contribution/Results: SteinSense is proven to asymptotically achieve the minimum mean squared error (MMSE) bound as $B \to \infty$. Empirically, it significantly outperforms convex methods such as Basis Pursuit across extensive synthetic and real-world datasets, exhibits strong robustness to model mismatch and distribution shift, and scales favorably.
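To make the mechanism concrete, below is a minimal Python sketch of one plausible form of the AMP loop with a blockwise positive-part James–Stein denoiser. Everything here (the function names, the noise estimate $\|z\|^2/n$, and the Onsager correction via the denoiser's divergence) is a standard-AMP assumption for illustration, not the authors' reference implementation.

```python
import numpy as np

def james_stein_shrink(v, sigma2):
    """Positive-part James-Stein shrinkage of one B-dim block toward 0.

    Returns the shrunk block and the divergence of the shrinkage map,
    which the AMP Onsager correction needs. Requires B >= 3.
    (Illustrative assumption, not the paper's exact denoiser.)
    """
    B = v.size
    c = (B - 2) * sigma2
    norm2 = np.sum(v ** 2)
    if norm2 <= c:                       # positive part: zero the block
        return np.zeros_like(v), 0.0
    shrunk = (1.0 - c / norm2) * v
    div = B - (B - 2) * c / norm2        # sum_i d(shrunk_i)/d(v_i)
    return shrunk, div

def steinsense_sketch(y, A, B, n_iter=30):
    """Illustrative AMP loop with a blockwise James-Stein denoiser.

    y: (n,) measurements; A: (n, N) sensing matrix, with N a multiple
    of the block (vector) dimension B.
    """
    n, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                          # effective observation
        sigma2 = np.sum(z ** 2) / n              # effective-noise estimate
        x_new = np.empty_like(x)
        total_div = 0.0
        for k in range(0, N, B):                 # denoise block by block
            x_new[k:k + B], d = james_stein_shrink(r[k:k + B], sigma2)
            total_div += d
        z = y - A @ x_new + (total_div / n) * z  # Onsager-corrected residual
        x = x_new
    return x
```

Note that the loop consumes only the measurements and the sensing matrix: no threshold, sparsity level, or training data appears anywhere, which mirrors the parameter-free and sparsity-agnostic claims above.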
📝 Abstract
The trend in modern science and technology is to take vector measurements rather than scalars, ruthlessly scaling to ever higher dimensional vectors. For about two decades now, traditional scalar Compressed Sensing has been synonymous with a Convex Optimization based procedure called Basis Pursuit. In the vector recovery case, the natural tendency is to turn to a straightforward vector extension of Basis Pursuit, also based on Convex Optimization. However, Convex Optimization is provably suboptimal, particularly when the vector dimension $B$ is large. In this paper, we propose SteinSense, a lightweight iterative algorithm that is provably optimal when $B$ is large. It has no tuning parameters, needs no training data, requires zero knowledge of sparsity, and is embarrassingly simple to implement, all of which makes it easily scalable to high vector dimensions. We conduct a massive volume of both real and synthetic experiments that confirm the efficacy of SteinSense, and we also provide theoretical justification based on ideas from Approximate Message Passing. Fascinatingly, we discover that SteinSense is quite robust, delivering the same quality of performance on real data even under substantial departures from the conditions under which existing theory holds.
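For context, the "straightforward vector extension of Basis Pursuit" referenced above is, assuming the standard block-sparse mixed $\ell_2/\ell_1$ formulation (an assumption on our part; the abstract does not spell it out), the following convex program:

```latex
% Scalar Basis Pursuit recovers x from y = Ax via
%   min ||x||_1  subject to  y = Ax.
% Assumed vector (block) extension: partition x into N/B blocks
% x_{(j)} of dimension B and minimize the sum of blockwise l2 norms,
% which promotes block sparsity while remaining convex.
\min_{x \in \mathbb{R}^{N}} \; \sum_{j=1}^{N/B} \bigl\lVert x_{(j)} \bigr\rVert_2
\quad \text{subject to} \quad y = A x,
\qquad x_{(j)} \in \mathbb{R}^{B}.
```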