🤖 AI Summary
This paper studies optimal subset selection from a large dataset of size $N$ under fixed empirical risk minimization (ERM), given a limited budget $n \ll N$, such that the subset’s training performance closely matches that of the full dataset. It establishes the first systematic theoretical framework for data selection under ERM, deriving tight selection bounds for mean estimation, linear classification, and regression, and characterizing error-rate regimes for binary classification and stochastic convex optimization. The methodology integrates learning-theoretic analysis, generalization bound derivation, data importance quantification, and optimal sampling strategy design. Key contributions include: (i) information-theoretically optimal selection bounds; (ii) proof that vanishingly small subsets suffice to achieve full-dataset performance across multiple ERM tasks; and (iii) characterization of the fundamental trade-off between selection efficiency and problem structure, specifically loss curvature and data distribution properties.
📝 Abstract
Learning theory has traditionally followed a model-centric approach, focusing on designing optimal algorithms for a fixed natural learning task (e.g., linear classification or regression). In this paper, we adopt a complementary data-centric perspective, whereby we fix a natural learning rule and focus on optimizing the training data. Specifically, we study the following question: given a learning rule $\mathcal{A}$ and a data selection budget $n$, how well can $\mathcal{A}$ perform when trained on at most $n$ data points selected from a population of $N$ points? We investigate when it is possible to select $n \ll N$ points and achieve performance comparable to training on the entire population. We address this question across a variety of empirical risk minimizers. Our results include optimal data-selection bounds for mean estimation, linear classification, and linear regression. Additionally, we establish two general results: a taxonomy of error rates in binary classification and in stochastic convex optimization. Finally, we propose several open questions and directions for future research.
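To make the data-selection question concrete, consider its simplest instance from the abstract: mean estimation. The sketch below (our own illustration, not the paper's algorithm) greedily selects $n \ll N$ points whose empirical mean tracks the full-population mean, and compares the result against a uniformly random subset of the same size. The greedy rule and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10_000, 10  # population size and selection budget, n << N
population = rng.normal(loc=2.0, scale=1.0, size=N)  # 1-D dataset
full_mean = population.mean()

# Greedy selection: at each step, add the remaining point that brings
# the subset's running mean closest to the full-population mean.
remaining = np.ones(N, dtype=bool)
chosen_sum = 0.0
for k in range(n):
    idx = np.flatnonzero(remaining)
    # subset mean if each candidate point were added next
    candidate_means = (chosen_sum + population[idx]) / (k + 1)
    best = idx[np.argmin(np.abs(candidate_means - full_mean))]
    chosen_sum += population[best]
    remaining[best] = False

subset_err = abs(chosen_sum / n - full_mean)
random_err = abs(population[rng.choice(N, n, replace=False)].mean() - full_mean)
print(f"greedy subset error: {subset_err:.2e}")
print(f"random subset error: {random_err:.2e}")
```

A uniformly random subset of size $n$ typically incurs error on the order of $1/\sqrt{n}$, while a carefully selected subset of the same size can match the full-population mean far more closely; this gap between selection and sampling is the phenomenon the paper quantifies.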