Optimal Quantized Compressed Sensing via Projected Gradient Descent

📅 2024-07-06
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work addresses efficient recovery of star-shaped signals—such as sparse or effectively sparse signals—from L-level quantized measurements with random thresholds. We propose a projected gradient descent (PGD) algorithm based on a one-sided ℓ₁ loss, unifying 1-bit and multi-bit quantized compressed sensing. Under broad conditions—including sub-Gaussian sensing matrices and general dithering models—we establish, for the first time, that this PGD algorithm achieves information-theoretically optimal error rates: Õ(k/(mL)) for k-sparse signals and Õ((k/(mL))^{1/3}) for effectively sparse signals; in the 1-bit case, it recovers the optimality of NBIHT. Technically, we introduce a novel analytical framework combining separation-probability and small-ball-probability estimates, and design a product embedding technique tailored to multi-bit quantization to ensure global convergence.
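To make the measurement model concrete, the following is a minimal NumPy sketch of generating dithered L-level measurements y = Q(Ax − τ). The uniform saturating quantizer, the dither range, and all dimensions are illustrative assumptions for this sketch; the paper's framework allows more general quantizers and dithering models.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(t, L=4, delta=0.5):
    """Uniform L-level quantizer: map each entry of t to the nearest of L bin
    centers of width delta, saturating at the outermost levels (one common
    choice; the paper treats general L-level quantizers)."""
    bins = np.clip(np.round(t / delta + (L - 1) / 2.0), 0, L - 1)
    return delta * (bins - (L - 1) / 2.0)

# Dithered multi-bit measurements of a unit-norm k-sparse signal.
m, n, k = 1000, 200, 5
A = rng.standard_normal((m, n))               # sub-Gaussian (here Gaussian) sensing matrix
x = np.zeros(n); x[:k] = rng.standard_normal(k); x /= np.linalg.norm(x)
tau = rng.uniform(-1.0, 1.0, size=m)          # random thresholds acting as dither
y = quantize(A @ x - tau)                     # observed L-level quantized measurements
```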

📝 Abstract
This paper provides a unified treatment of the recovery of structured signals living in a star-shaped set from general quantized measurements $\mathcal{Q}(\mathbf{A}\mathbf{x}-\boldsymbol{\tau})$, where $\mathbf{A}$ is a sensing matrix, $\boldsymbol{\tau}$ is a vector of (possibly random) quantization thresholds, and $\mathcal{Q}$ denotes an $L$-level quantizer. The ideal estimator based on consistent quantized measurements is optimal in some important instances but typically infeasible to compute. To this end, we study the projected gradient descent (PGD) algorithm with respect to the one-sided $\ell_1$-loss and identify the conditions under which PGD achieves the same error rate, up to logarithmic factors. These conditions involve estimates of the separation probability, the small-ball probability, and some moment bounds that are easy to validate. For the multi-bit case, we also develop a complementary approach based on product embedding to show global convergence. When applied to popular models such as 1-bit compressed sensing with Gaussian $\mathbf{A}$ and zero $\boldsymbol{\tau}$, and the dithered 1-bit/multi-bit models with sub-Gaussian $\mathbf{A}$ and uniform dither $\boldsymbol{\tau}$, our unified treatment yields error rates that improve on or match the sharpest existing results in all instances. In particular, PGD achieves the information-theoretically optimal rate $\tilde{O}(\frac{k}{mL})$ for recovering $k$-sparse signals, and the rate $\tilde{O}((\frac{k}{mL})^{1/3})$ for effectively sparse signals. For 1-bit compressed sensing of sparse signals, our result recovers the optimality of normalized binary iterative hard thresholding (NBIHT), which was proved only very recently.
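The 1-bit instance of the algorithm is easy to sketch. Below is a minimal NumPy illustration of PGD on the one-sided ℓ₁ loss with hard-thresholding projection and per-step normalization, in the spirit of NBIHT; the function names, initialization, step size, and iteration count are illustrative assumptions for this sketch, not the paper's exact specification.

```python
import numpy as np

def hard_threshold(z, k):
    """Project onto k-sparse vectors by keeping the k largest-magnitude entries."""
    out = np.zeros_like(z)
    idx = np.argpartition(np.abs(z), -k)[-k:]
    out[idx] = z[idx]
    return out

def pgd_one_bit(A, y, tau, k, step=0.5, n_iter=300):
    """PGD on the one-sided l1 loss f(z) = (1/m) * sum_i max(0, -y_i (a_i^T z - tau_i)).
    Only sign-inconsistent measurements contribute to the (sub)gradient; each
    step is followed by hard thresholding and renormalization, as in NBIHT."""
    m = A.shape[0]
    z = hard_threshold(A.T @ y / m, k)           # simple data-driven initialization
    z /= max(np.linalg.norm(z), 1e-12)
    for _ in range(n_iter):
        inconsistent = y * (A @ z - tau) < 0     # measurements the current iterate violates
        grad = -(A[inconsistent].T @ y[inconsistent]) / m
        z = hard_threshold(z - step * grad, k)
        z /= max(np.linalg.norm(z), 1e-12)       # renormalize onto the unit sphere
    return z

# Minimal demo: dithered 1-bit measurements of a unit-norm k-sparse signal.
rng = np.random.default_rng(0)
m, n, k = 2000, 200, 5
A = rng.standard_normal((m, n))
x = np.zeros(n); x[:k] = rng.standard_normal(k); x /= np.linalg.norm(x)
tau = rng.uniform(-1.0, 1.0, size=m)
y = np.sign(A @ x - tau)
x_hat = pgd_one_bit(A, y, tau, k)
print("recovery error:", np.linalg.norm(x_hat - x))
```

For the multi-bit case, the one-sided loss generalizes by penalizing the distance from a_iᵀz − τ_i to the quantization bin indicated by y_i; the paper's product-embedding argument is what guarantees global convergence there.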
Problem

Research questions and friction points this paper is trying to address.

Recover structured signals from quantized measurements
Analyze projected gradient descent for optimal error rates
Improve error rates in compressed sensing models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Projected Gradient Descent for quantized recovery
Product embedding for multi-bit convergence
Unified treatment improves error rates