🤖 AI Summary
This paper studies active learning for monotonic classification: learning a monotonic classifier in multidimensional space with as few label queries as possible, while ensuring its error is at most (1+ε) times that of the optimal monotonic classifier (ε ≥ 0). Departing from conventional absolute-error frameworks, we introduce, for the first time, a relative-error approximation model, overcoming fundamental limitations of prior theory. Our approach integrates combinatorial game-theoretic analysis, a structural characterization of monotone Boolean functions, and adaptive query-strategy design, yielding nearly matching upper and lower bounds on query complexity across the full range of ε; the query cost is thus essentially the minimum needed to achieve this relative approximation guarantee. Our work establishes a new theoretical benchmark for monotonic learning, unifying approximation guarantees with query efficiency in active learning.
📝 Abstract
In monotone classification, the input is a multi-set $P$ of points in $\mathbb{R}^d$, each associated with a hidden label from $\{-1, 1\}$. The goal is to identify a monotone function $h$, which acts as a classifier, mapping from $\mathbb{R}^d$ to $\{-1, 1\}$ with a small {\em error}, measured as the number of points $p \in P$ whose labels differ from the function values $h(p)$. The cost of an algorithm is defined as the number of points having their labels revealed. This article presents the first study on the lowest cost required to find a monotone classifier whose error is at most $(1 + \epsilon) \cdot k^*$, where $\epsilon \ge 0$ and $k^*$ is the minimum error achieved by an optimal monotone classifier; in other words, the error is allowed to exceed the optimal by at most a relative factor. Nearly matching upper and lower bounds are presented for the full range of $\epsilon$. All previous work on the problem can only achieve an error higher than the optimal by an absolute factor.
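To make the abstract's definitions concrete, here is a minimal Python sketch (illustrative only, not from the paper; all names such as `dominates`, `is_monotone`, and `error` are our own) of the monotonicity condition and the error measure. The active-learning aspect, where labels are hidden and revealed one query at a time, is only indicated in a comment.

```python
import itertools

def dominates(p, q):
    """True if p <= q coordinate-wise (the partial order on R^d)."""
    return all(pi <= qi for pi, qi in zip(p, q))

def is_monotone(h, points):
    """h is monotone if it never flips from +1 to -1 along the order:
    whenever p <= q coordinate-wise, h(p) <= h(q)."""
    return all(h(p) <= h(q)
               for p, q in itertools.permutations(points, 2)
               if dominates(p, q))

def error(h, labeled_points):
    """Number of points whose (hidden) label disagrees with h."""
    return sum(1 for p, label in labeled_points if h(p) != label)

# Toy instance in R^2. In the actual problem the labels are hidden and
# each one revealed costs a query; here they are given for illustration.
P = [((0.0, 0.0), -1), ((1.0, 0.0), -1), ((0.0, 1.0), 1), ((1.0, 1.0), 1)]

# A threshold on the second coordinate gives a monotone classifier here.
h = lambda p: 1 if p[1] >= 0.5 else -1
assert is_monotone(h, [p for p, _ in P])
print(error(h, P))  # 0: h matches every label, so k* = 0 on this instance
```

An algorithm for the problem would query labels adaptively and stop once it can certify some monotone $h$ with error at most $(1 + \epsilon) \cdot k^*$; the paper's bounds concern how few such queries suffice.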