The Price of Sparsity: Sufficient Conditions for Sparse Recovery using Sparse and Sparsified Measurements

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the support recovery of sparse signals under noise using sparse measurement matrices. To characterize the trade-off between sample complexity and measurement sparsity, we introduce the notion of "sparsity cost," quantifying the additional sample overhead induced by matrix sparsification. Leveraging probabilistic analysis, information-theoretic lower bounds, and random matrix theory, under standard regularity assumptions, we derive sharp phase-transition thresholds and sufficient sampling conditions for exact support recovery. Our theoretical results establish that when $ds/p \to \infty$, the information-theoretic limit is $n_{\text{INF}}^{\text{SP}} = \Theta\big(s \log(p/s) / \log(ds/p)\big)$; when $s = \alpha p$ and $d = \psi p$, the required sample size scales as $\Theta(p/\psi^2)$. To our knowledge, this is the first work to precisely quantify how measurement sparsity fundamentally limits support recovery performance, providing a foundational theoretical benchmark for the design of sparse-sensing systems.
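For concreteness, the measurement setup can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a Bernoulli$(d/p)$ sparsity pattern with i.i.d. Gaussian non-zeros and illustrative dimensions; the paper's exact matrix ensemble and signal model may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

p, s, d, n = 1000, 10, 50, 400  # illustrative dimensions (assumed, not from the paper)

# s-sparse signal: support drawn uniformly at random, non-zeros of unit magnitude
beta = np.zeros(p)
support = rng.choice(p, size=s, replace=False)
beta[support] = 1.0

# Sparse measurement matrix: each entry is non-zero independently with
# probability d/p, so each row has d non-zeros in expectation
mask = rng.random((n, p)) < d / p
X = mask * rng.standard_normal((n, p))

# Noisy projections y = X beta + w; the goal is to recover `support` from (X, y)
w = rng.standard_normal(n)
y = X @ beta + w

print(f"average non-zeros per row: {mask.sum(axis=1).mean():.1f} (expected {d})")
```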

📝 Abstract
We consider the problem of recovering the support of a sparse signal using noisy projections. While extensive work has been done in the dense measurement matrix setting, the sparse setting remains less explored. In this work, we establish sufficient conditions on the sample size for successful sparse recovery using sparse measurement matrices. Combining our result with previously known necessary conditions, we find that, in the regime where $ds/p \rightarrow +\infty$, sparse recovery in the sparse setting exhibits a phase transition at an information-theoretic threshold of $n_{\text{INF}}^{\text{SP}} = \Theta\left(s\log\left(p/s\right)/\log\left(ds/p\right)\right)$, where $p$ denotes the signal dimension, $s$ the number of non-zero components of the signal, and $d$ the expected number of non-zero components per row of the measurement matrix. This expression makes the price of sparsity explicit: restricting each measurement to $d$ non-zeros inflates the required sample size by a factor of $\log s/\log\left(ds/p\right)$, revealing a precise trade-off between sampling complexity and measurement sparsity. Additionally, we examine the effect of sparsifying an originally dense measurement matrix on sparse signal recovery. We prove, in the regime of $s = \alpha p$ and $d = \psi p$ with $\alpha, \psi \in \left(0,1\right)$ and $\psi$ small, that a sample of size $n^{\text{Sp-ified}}_{\text{INF}} = \Theta\left(p/\psi^2\right)$ is sufficient for recovery, subject to a certain uniform integrability conjecture whose proof is work in progress.
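As a sanity check on the stated inflation factor (a sketch: the dense-setting benchmark $n_{\text{INF}}^{\text{dense}} = \Theta\left(s\log(p/s)/\log s\right)$ is inferred from that factor rather than quoted from the paper), dividing the two thresholds gives

$$\frac{n_{\text{INF}}^{\text{SP}}}{n_{\text{INF}}^{\text{dense}}} = \frac{s\log(p/s)/\log(ds/p)}{s\log(p/s)/\log s} = \frac{\log s}{\log(ds/p)}.$$

In particular, taking $d = p$ (fully dense rows) gives $ds/p = s$, so the factor reduces to $1$ and the sparse threshold recovers the dense benchmark, as expected.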
Problem

Research questions and friction points this paper is trying to address.

Establishing sufficient conditions for sparse signal recovery using sparse measurements
Analyzing phase transition thresholds for sparse recovery with sparse matrices
Quantifying trade-offs between measurement sparsity and required sample size
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse measurement matrices for support recovery
Phase transition at information-theoretic threshold
Sparsifying dense matrices with a quantified sample-complexity trade-off (a minimal sketch follows below)
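The sparsification idea in the last bullet can be sketched as follows: starting from a dense i.i.d. Gaussian matrix, keep each entry independently with probability $\psi$, leaving $d = \psi p$ non-zeros per row in expectation. The parameter values and the Bernoulli-masking mechanism here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

p, alpha, psi = 200, 0.1, 0.1   # illustrative values for the regime s = alpha*p, d = psi*p
s, d = int(alpha * p), psi * p
n = int(p / psi**2)             # sample size of order Theta(p / psi^2) from the theorem

# Originally dense i.i.d. Gaussian measurement matrix
X_dense = rng.standard_normal((n, p))

# Sparsify: keep each entry independently with probability psi,
# leaving d = psi * p non-zeros per row in expectation
keep = rng.random((n, p)) < psi
X_sparse = X_dense * keep

print(f"n = {n}, expected non-zeros per row d = {d:.0f}, "
      f"observed: {keep.sum(axis=1).mean():.1f}")
```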
Youssef Chaabouni
Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
David Gamarnik
Professor of Operations Research, MIT
Applied Probability, Random Graphs and Random Structures, Algorithms, Statistics and Machine Learning, Queueing Theory