🤖 AI Summary
This work addresses the challenges of high sample complexity and limited generalization in multi-group learning by proving the tightest-known upper bounds on its sample complexity. Built upon the one-inclusion graph prediction framework and incorporating a generalization of bipartite $b$-matching, the proposed algorithm aligns learning objectives across diverse groups. In the group-realizable setting, a matching lower bound confirms that the algorithm's $\log n / n$ convergence rate is optimal in general. Moreover, when the group on which the learner is evaluated is chosen independently of the sample, the algorithm further achieves the optimal $1/n$ convergence rate.
📝 Abstract
We prove the tightest-known upper bounds on the sample complexity of multi-group learning. Our algorithm extends the one-inclusion graph prediction strategy using a generalization of bipartite $b$-matching. In the group-realizable setting, we provide a lower bound confirming that our algorithm's $\log n / n$ convergence rate is optimal in general. If one relaxes the learning objective such that the group on which we are evaluated is chosen obliviously of the sample, then our algorithm achieves the optimal $1/n$ convergence rate under group-realizability.
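The combinatorial primitive named in the abstract, bipartite $b$-matching, selects a maximum set of edges in a bipartite graph subject to a per-vertex capacity $b(v)$ on how many selected edges may touch each vertex $v$. The sketch below is a generic illustration of plain bipartite $b$-matching via the standard reduction to maximum flow (computed with Edmonds–Karp in pure Python); it is not the paper's generalized variant, and the instance data and function names are made up for illustration.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dense capacity matrix."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:  # no augmenting path left
            break
        # Find the bottleneck residual capacity along the path.
        bottleneck = float("inf")
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        # Push the bottleneck amount of flow along the path.
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
    return total

def b_matching_size(n_left, n_right, edges, b_left, b_right):
    """Size of a maximum bipartite b-matching via max flow.

    Source -> left vertex i with capacity b_left[i];
    right vertex j -> sink with capacity b_right[j];
    each bipartite edge gets capacity 1.
    """
    n = n_left + n_right + 2
    s, t = 0, n - 1
    cap = [[0] * n for _ in range(n)]
    for i in range(n_left):
        cap[s][1 + i] = b_left[i]
    for j in range(n_right):
        cap[1 + n_left + j][t] = b_right[j]
    for i, j in edges:
        cap[1 + i][1 + n_left + j] = 1
    return max_flow(cap, s, t)
```

On a toy instance with left capacities $(2, 1)$ and right capacities $(1, 2)$, all three edges $\{(0,0), (0,1), (1,1)\}$ can be selected simultaneously, so the maximum $b$-matching has size 3.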