🤖 AI Summary
This paper addresses the ℓ₁-regularized least-squares problem in compressed sensing: $\min_x \frac{1}{2}\|\mathbf{A}x - b\|^2 + \eta \|x\|_1$. To tackle this high-dimensional sparse optimization problem, the authors propose the Dynamic Working Set (DWS) method: at each iteration, DWS activates only a subset of variables whose size scales as $O(\frac{s}{\varepsilon}\log s \log\frac{1}{\varepsilon})$, where $s$ is the true sparsity and $\varepsilon$ the target accuracy; a standard regression solver is then applied to this low-dimensional working set. Theoretically, DWS achieves an additive error guarantee of $\varepsilon/\eta^2$ while every intermediate solution has only $O(\frac{s}{\varepsilon}\log s \log\frac{1}{\varepsilon})$ non-zero coordinates, so solution sparsity is preserved throughout all iterations. Empirical results demonstrate that DWS converges faster and incurs significantly lower computational cost than state-of-the-art solvers.
📝 Abstract
We propose a dynamic working set method (DWS) for the problem $\min_{\mathtt{x} \in \mathbb{R}^n} \frac{1}{2}\|\mathtt{Ax}-\mathtt{b}\|^2 + \eta\|\mathtt{x}\|_1$ that arises from compressed sensing. DWS manages the working set while iteratively calling a regression solver to generate progressively better solutions. Our experiments show that DWS is more efficient than other state-of-the-art software in the context of compressed sensing. Scale space such that $\|\mathtt{b}\|=1$. Let $s$ be the number of non-zeros in the unknown signal. We prove that for any given $\varepsilon>0$, DWS reaches a solution with an additive error $\varepsilon/\eta^2$ such that each call of the solver uses only $O(\frac{1}{\varepsilon}s\log s \log\frac{1}{\varepsilon})$ variables, and each intermediate solution has $O(\frac{1}{\varepsilon}s\log s\log\frac{1}{\varepsilon})$ non-zero coordinates.
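To illustrate the general shape of a working-set scheme like the one described above, here is a minimal sketch in Python. It is not the paper's actual algorithm: the activation rule (greedily adding the coordinates with the largest KKT violations, `batch` at a time) and the inner solver (plain ISTA on the restricted problem) are illustrative stand-ins, and the working-set size is not controlled to match the paper's $O(\frac{1}{\varepsilon}s\log s \log\frac{1}{\varepsilon})$ bound.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def solve_subproblem(A_w, b, x_w, eta, n_steps=200):
    """ISTA on the restricted problem min 0.5||A_w x - b||^2 + eta||x||_1,
    where A_w holds only the columns in the current working set."""
    L = np.linalg.norm(A_w, 2) ** 2  # Lipschitz constant of the smooth part
    for _ in range(n_steps):
        grad = A_w.T @ (A_w @ x_w - b)
        x_w = soft_threshold(x_w - grad / L, eta / L)
    return x_w

def dws_lasso(A, b, eta, batch=5, n_outer=10):
    """Toy dynamic-working-set loop: grow the working set by the most
    violating coordinates, then re-solve the low-dimensional subproblem."""
    n = A.shape[1]
    x = np.zeros(n)
    work = np.zeros(n, dtype=bool)
    for _ in range(n_outer):
        # KKT violation proxy: correlation of each inactive column
        # with the current residual A x - b.
        corr = np.abs(A.T @ (A @ x - b))
        corr[work] = -np.inf          # already active; never re-added
        new = np.argsort(corr)[::-1][:batch]
        work[new[corr[new] > eta]] = True  # activate genuine violators only
        if not work.any():
            break                     # optimality: no coordinate violates
        x[work] = solve_subproblem(A[:, work], b, x[work], eta)
    return x
```

The point of the structure is that every call to the inner solver touches only `work.sum()` variables rather than all $n$, and every intermediate `x` is supported on the working set, mirroring the sparsity-preservation property claimed in the abstract.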