Private Learning of Littlestone Classes, Revisited

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates online and PAC learning of Littlestone classes under approximate differential privacy ((ε,δ)-DP). To address the loose mistake bounds and high sample complexity of prior approaches, we propose a private sparse selection framework grounded in a refined interpretation of “irreducibility”: it integrates a sparsified variant of the Exponential Mechanism to enable precise control over the distribution from which an output is sampled out of a strongly input-dependent candidate pool. In the realizable setting, our online learner achieves a mistake bound of Õ(d⁹·⁵ log T), a doubly-exponential improvement over prior work that comes polynomially close to the known lower bound; our PAC sample complexity is bounded by Õ(d⁵ log(1/δβ)/(εα)), attaining an optimal dependence on the accuracy parameter α. Together, these results yield significant, unified improvements in the privacy–utility trade-off for both the online and PAC learning paradigms.

📝 Abstract
We consider online and PAC learning of Littlestone classes subject to the constraint of approximate differential privacy. Our main result is a private learner to online-learn a Littlestone class with a mistake bound of $\tilde{O}(d^{9.5}\cdot \log(T))$ in the realizable case, where $d$ denotes the Littlestone dimension and $T$ the time horizon. This is a doubly-exponential improvement over the state-of-the-art [GL'21] and comes polynomially close to the lower bound for this task. The advancement is made possible by a couple of ingredients. The first is a clean and refined interpretation of the “irreducibility” technique from the state-of-the-art private PAC-learner for Littlestone classes [GGKM'21]. Our new perspective also allows us to improve the PAC-learner of [GGKM'21] and give a sample complexity upper bound of $\widetilde{O}(\frac{d^5 \log(1/\delta\beta)}{\varepsilon \alpha})$, where $\alpha$ and $\beta$ denote the accuracy and confidence of the PAC learner, respectively. This improves over [GGKM'21] by factors of $\frac{d}{\alpha}$ and attains an optimal dependence on $\alpha$. Our algorithm uses a private sparse selection algorithm to \emph{sample} from a pool of strongly input-dependent candidates. However, unlike most previous uses of sparse selection algorithms, where one only cares about the utility of the output, our algorithm requires understanding and manipulating the actual distribution from which an output is drawn. In the proof, we use a sparse version of the Exponential Mechanism from [GKM'21] which behaves nicely under our framework and is amenable to a very easy utility proof.
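To make the central primitive concrete, here is a minimal sketch of an exponential-mechanism-style sampler with a sparsified support. This is an illustrative toy, not the construction from [GKM'21]: the function name, the `top_k` truncation strategy, and the score interface are assumptions for illustration. It samples a candidate with probability proportional to exp(ε·score/(2Δ)), but first restricts the support to the highest-scoring candidates so that a very large candidate pool stays tractable.

```python
import math
import random

def sparse_exponential_mechanism(candidates, score, epsilon,
                                 sensitivity=1.0, top_k=None):
    """Illustrative sketch (NOT the [GKM'21] construction): sample a
    candidate with probability proportional to
        exp(epsilon * score(c) / (2 * sensitivity)),
    optionally truncating the support to the top_k highest-scoring
    candidates (the 'sparse' part of the sketch)."""
    # Score every candidate and sort best-first.
    scored = sorted(((score(c), c) for c in candidates),
                    key=lambda pair: -pair[0])
    if top_k is not None:
        scored = scored[:top_k]  # keep only a sparse high-scoring support
    # Subtract the max score before exponentiating, for numerical stability.
    best = scored[0][0]
    weights = [math.exp(epsilon * (s - best) / (2.0 * sensitivity))
               for s, _ in scored]
    total = sum(weights)
    # Inverse-transform sampling over the (unnormalized) weights.
    r = random.random() * total
    for w, (_, c) in zip(weights, scored):
        r -= w
        if r <= 0:
            return c
    return scored[-1][1]  # fallback for floating-point round-off
```

With a very large ε the mechanism concentrates on the best-scoring candidate, which is the standard utility intuition for the Exponential Mechanism; the paper's algorithm additionally needs fine-grained control over this output distribution, not just its utility.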
Problem

Research questions and friction points this paper is trying to address.

Private online learning of Littlestone classes with differential privacy
Improving mistake bounds for private Littlestone class learning
Developing private PAC learners with optimal sample complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Private sparse selection algorithm for sampling candidates
Refined irreducibility technique from prior PAC-learner
Sparse Exponential Mechanism enabling distribution manipulation