🤖 AI Summary
This paper addresses the problem of efficiently recovering the best $k$-sparse approximation of an $n$-dimensional vector, targeting a $(1+\varepsilon)$-approximation while minimizing runtime. It introduces a new analytical framework based on **weighted hypergraph peeling**, which generalizes classical hypergraph peeling to settings where both vertices and hyperedges carry weights, substantially broadening its ability to model non-uniform structures. Combined with a **non-adaptive linear sketch** having $O((k/\varepsilon)\log n)$ rows and $O(\log n)$ column sparsity, the method recovers the signal in $O((k/\varepsilon)\log n)$ time. This improves the previous best runtime by a $\log n$ factor and is optimal for a wide range of the parameters $k$, $\varepsilon$, and $n$.
📝 Abstract
We demonstrate that the best $k$-sparse approximation of a length-$n$ vector can be recovered within a $(1+\varepsilon)$-factor approximation in $O((k/\varepsilon)\log n)$ time using a non-adaptive linear sketch with $O((k/\varepsilon)\log n)$ rows and $O(\log n)$ column sparsity. This improves the running time of the fastest-known sketch [Nakos, Song; STOC '19] by a factor of $\log n$, and is optimal for a wide range of parameters.
Our algorithm is simple and likely to be practical, with the analysis built on a new technique we call weighted hypergraph peeling. Our method naturally extends known hypergraph peeling processes (as in the analysis of Invertible Bloom Filters) to a setting where edges and nodes have (possibly correlated) weights.
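For readers unfamiliar with the unweighted process being generalized: hypergraph peeling is the decoding loop behind Invertible Bloom Filters, where one repeatedly finds a "pure" cell containing a single surviving item, reads it off, and subtracts it from the sketch. A minimal Python sketch of that classical (unweighted) process is below; the fixed affine hash maps, checksum, and function names are illustrative stand-ins, not constructions from the paper:

```python
def sketch_and_peel(x, m):
    """Toy IBF-style linear sketch plus peeling decoder for an exactly
    sparse integer vector given as {index: value}. Illustrative only."""
    # Each index i is hashed into 3 of the m cells via fixed affine maps
    # (a stand-in for the random hash functions of a real sketch).
    params = [(3, 7), (5, 11), (7, 13)]

    def cells_of(i):
        return [(a * i + b) % m for a, b in params]

    def chkhash(i):                      # lightweight checksum fingerprint
        return (31 * i * i + 17) % 997

    # Linear sketch: per cell, keep (sum of values, sum of i*value, checksum).
    val, key, chk = [0] * m, [0] * m, [0] * m

    def add(i, v):                       # linearity lets us add or subtract an item
        for c in cells_of(i):
            val[c] += v
            key[c] += i * v
            chk[c] += chkhash(i) * v

    for i, v in x.items():
        add(i, v)

    # Peeling: repeatedly find a "pure" cell (exactly one surviving item),
    # read off its index and value, and subtract it from all its cells.
    recovered = {}
    changed = True
    while changed:
        changed = False
        for c in range(m):
            if val[c] == 0 or key[c] % val[c] != 0:
                continue
            j = key[c] // val[c]
            if j < 0 or c not in cells_of(j) or chk[c] != chkhash(j) * val[c]:
                continue                 # cell is not pure
            v = val[c]
            recovered[j] = recovered.get(j, 0) + v
            add(j, -v)                   # peel item j out of the sketch
            changed = True
    return recovered
```

The paper's contribution is analyzing this kind of process when cells and items carry (possibly correlated) weights, rather than the uniform setting shown here.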