🤖 AI Summary
This paper addresses the sublinear-time support recovery problem for sparse signals in one-bit compressed sensing (1bCS): given sign measurements $y = \mathrm{sign}(Ax)$, where $x \in \mathbb{R}^n$ is $k$-sparse ($k \ll n$), the goal is to recover $\mathrm{supp}(x)$ exactly or approximately in $o(n)$ time. Existing methods typically require $\Omega(n)$ time, a fundamental bottleneck. To overcome this, the paper proposes algorithms leveraging random binary sensing matrices and group-testing principles. The first achieves universal exact support recovery; the second yields $\varepsilon$-approximate recovery (i.e., it recovers a superset of the support with at most $\varepsilon k$ false positives). Their measurement complexities are $O(k^2 \log(n/k)\log n)$ and $O(k\varepsilon^{-1}\log(n/k)\log n)$, respectively. A third scheme achieves probabilistic exact support recovery with $O(k\frac{\log k}{\log\log k}\log n)$ measurements and decoding time linear in the number of measurements. All schemes attain vanishing failure probability as the parameters grow, breaking the conventional time–measurement trade-off frontier in 1bCS.
📝 Abstract
The problem of support recovery in one-bit compressed sensing (1bCS) aims to recover the support of a signal $x\in \mathbb{R}^n$, denoted supp$(x)$, from the observation $y=\text{sign}(Ax)$, where $A\in \mathbb{R}^{m\times n}$ is a sensing matrix and $|\text{supp}(x)|\leq k$, $k \ll n$. Under this setting, most prior works require $\Omega(n)$ recovery time. In this paper, we propose two schemes with sublinear $o(n)$ runtime. (1.i): For universal exact support recovery, a scheme with $m=O(k^2\log(n/k)\log n)$ measurements and runtime $D=O(km)$. (1.ii): For universal $\epsilon$-approximate support recovery, the same scheme with $m=O(k\epsilon^{-1}\log(n/k)\log n)$ and runtime $D=O(\epsilon^{-1}m)$, improving the runtime significantly at the cost of an extra $O(\log n)$ factor in the number of measurements compared to the current optimal (Matsumoto et al., 2023). (2): For probabilistic exact support recovery in the sublinear regime, a scheme with $m=O(k\frac{\log k}{\log\log k}\log n)$ measurements and runtime $O(m)$ and vanishing error probability, improving the recent result of Yang et al., 2025.
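To make the measurement model concrete, here is a minimal simulation sketch. It is not the paper's algorithm: it assumes a nonnegative signal and a Bernoulli binary sensing matrix, in which case each one-bit measurement $\text{sign}(\langle a_t, x\rangle)$ behaves like a group test (0 iff the row hits no support element), and a COMP-style eliminator recovers a superset of the support. The toy sizes `n, k, m` are illustrative, not the paper's asymptotic regimes.

```python
import random

random.seed(0)
n, k, m = 30, 3, 200  # toy problem sizes (illustrative only)

# k-sparse nonnegative signal x in R^n (nonnegativity is our simplifying assumption)
support = random.sample(range(n), k)
x = [0.0] * n
for i in support:
    x[i] = random.uniform(0.5, 1.0)

# random binary sensing matrix A with Bernoulli(1/2) entries
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]

def sign(v):
    # sign(0) taken as 0, so with x >= 0 each measurement is 0/1, i.e. a group test
    return (v > 0) - (v < 0)

# one-bit measurements y = sign(Ax)
y = [sign(sum(A[t][j] * x[j] for j in range(n))) for t in range(m)]

# COMP-style decoding: a coordinate is eliminated if it appears in any zero test;
# this always returns a superset of supp(x) for nonnegative signals
est = set(range(n))
for t in range(m):
    if y[t] == 0:  # test t saw no support element
        est -= {j for j in range(n) if A[t][j] == 1}
```

With enough tests, `est` shrinks to (a small superset of) the true support; support coordinates are never eliminated, since any test containing one outputs 1.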