🤖 AI Summary
This paper studies active learning of halfspaces without point synthesis, focusing on the case where the normal vector is drawn from a known finite set of size $D$. To circumvent the $\Omega(n)$ query lower bound that constrains approaches restricted to querying the dataset itself, we propose a parallel binary search framework operating over $D$ total orderings, achieving an optimal query complexity of $\Theta(D + \log n)$. We further extend this strategy to the PAC learning framework with robustness against adversarial noise: the learner attains error $\varepsilon$ with only $O\big(\min(D + \log(1/\varepsilon),\, 1/\varepsilon) \cdot \log D\big)$ queries, optimal up to a $\log D$ factor. As a corollary, this closes the previously open gap of $O(d \log n)$ vs. $\Omega(d + \log n)$ for learning axis-aligned halfspaces without point synthesis.
📝 Abstract
In the classic point location problem, one is given an arbitrary dataset $X \subset \mathbb{R}^d$ of $n$ points with query access to an unknown halfspace $f : \mathbb{R}^d \to \{0,1\}$, and the goal is to learn the label of every point in $X$. This problem is extremely well-studied and a nearly-optimal $\widetilde{O}(d \log n)$ query algorithm is known due to Hopkins-Kane-Lovett-Mahajan (FOCS 2020). However, their algorithm is granted the power to query arbitrary points outside of $X$ (point synthesis), and in fact without this power there is an $\Omega(n)$ query lower bound due to Dasgupta (NeurIPS 2004).
In this work our goal is to design efficient algorithms for learning halfspaces without point synthesis. To circumvent the $\Omega(n)$ lower bound, we consider learning halfspaces whose normal vectors come from a set of size $D$, and show tight bounds of $\Theta(D + \log n)$. As a corollary, we obtain an optimal $O(d + \log n)$ query deterministic learner for axis-aligned halfspaces, closing a previous gap of $O(d \log n)$ vs. $\Omega(d + \log n)$. In fact, our algorithm solves the more general problem of learning a Boolean function $f$ over $n$ elements which is monotone under at least one of $D$ provided orderings. Our technical insight is to exploit the structure in these orderings to perform a binary search in parallel rather than considering each ordering sequentially, and we believe our approach may be of broader interest.
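For intuition, the $D = 1$ case of this more general problem is ordinary binary search: if $f$ is monotone along a single known ordering of the $n$ elements, its labels are determined by one threshold, which $O(\log n)$ queries locate. The following is a minimal sketch of that base case only (function names are ours, not from the paper):

```python
def learn_threshold(order, query):
    """Learn a Boolean function monotone under `order` (the D = 1 case).

    `order` lists the n elements so that f is non-decreasing along it;
    `query(x)` returns f(x) as a bool. Binary search finds the 0/1
    boundary with O(log n) queries, determining every label.
    """
    lo, hi = 0, len(order)  # invariant: f == 0 on order[:lo], f == 1 on order[hi:]
    while lo < hi:
        mid = (lo + hi) // 2
        if query(order[mid]):
            hi = mid
        else:
            lo = mid + 1
    # Boundary found: order[:lo] are labeled 0, order[lo:] are labeled 1.
    return {x: int(i >= lo) for i, x in enumerate(order)}
```

With $D$ candidate orderings, running this search on each ordering in turn would cost $O(D \log n)$ queries; the paper's contribution is interleaving the searches so that queries are shared across orderings, bringing the total down to $O(D + \log n)$.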
Furthermore, we use our exact learning algorithm to obtain nearly optimal algorithms for PAC-learning. We show that $O(\min(D + \log(1/\varepsilon),\, 1/\varepsilon) \cdot \log D)$ queries suffice to learn $f$ within error $\varepsilon$, even in a setting where $f$ can be adversarially corrupted on a $c\varepsilon$-fraction of points, for a sufficiently small constant $c$. This bound is optimal up to a $\log D$ factor, including in the realizable setting.