Group-wise oracle-efficient algorithms for online multi-group learning

📅 2024-06-07
🏛️ Neural Information Processing Systems
📈 Citations: 2
Influential: 0
🤖 AI Summary
This paper studies online multi-group learning, where groups are implicit subsets of the context space (such as overlapping subpopulations defined by expressive functions of demographic attributes) that are too numerous to enumerate explicitly. The goal is to achieve sublinear regret on every group simultaneously. The authors propose the first oracle-efficient online algorithms for this setting: they access groups only through an optimization oracle, with no explicit enumeration, and apply in the i.i.d., smoothed adversarial, and adversarial transductive settings. Combining a group-aware regret decomposition, smoothed analysis, and a transductive learning framework, the authors establish sublinear $O(\sqrt{T})$ regret bounds simultaneously for the overall policy and every individual group, improving on prior approaches that require explicit enumeration. The algorithms run in polynomial time, counting each oracle call as a single step, which makes them practically scalable.

📝 Abstract
We study the problem of online multi-group learning, a learning model in which an online learner must simultaneously achieve small prediction regret on a large collection of (possibly overlapping) subsequences corresponding to a family of groups. Groups are subsets of the context space, and in fairness applications, they may correspond to subpopulations defined by expressive functions of demographic attributes. In contrast to previous work on this learning model, we consider scenarios in which the family of groups is too large to explicitly enumerate, and hence we seek algorithms that only access groups via an optimization oracle. In this paper, we design such oracle-efficient algorithms with sublinear regret under a variety of settings, including: (i) the i.i.d. setting, (ii) the adversarial setting with smoothed context distributions, and (iii) the adversarial transductive setting.
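As an illustrative sketch of the objective described above (the notation here is assumed for exposition, not taken verbatim from the paper): writing $\Pi$ for the policy class, $\mathcal{G}$ for the family of groups, $g \subseteq \mathcal{X}$ for a group in the context space, and $\ell$ for the loss, the group-wise regret after $T$ rounds can be written as

```latex
\mathrm{Reg}_g(T)
  \;=\; \sum_{t=1}^{T} \mathbf{1}\{x_t \in g\}\,\ell\bigl(\pi_t(x_t), y_t\bigr)
  \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} \mathbf{1}\{x_t \in g\}\,\ell\bigl(\pi(x_t), y_t\bigr),
```

and the learner seeks $\mathrm{Reg}_g(T) = o(T)$ simultaneously for every $g \in \mathcal{G}$, even when $\mathcal{G}$ is too large to enumerate.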
Problem

Research questions and friction points this paper is trying to address.

Online multi-group learning with large group families
Achieving sublinear regret via oracle-efficient algorithms
Handling overlapping groups without explicit enumeration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Oracle-efficient algorithms for large group families
Sublinear regret in i.i.d. and adversarial settings
Optimization oracle access without explicit enumeration
Samuel Deng
Department of Computer Science, Columbia University, New York, NY, USA
Daniel Hsu
Columbia University
Algorithmic statistics · learning theory · machine learning
Jingwen Liu
Department of Computer Science, Columbia University, New York, NY, USA