🤖 AI Summary
This paper addresses the core-set selection problem: efficiently identifying a small, representative subset of large-scale data to support model training and compression. To overcome limitations of existing paradigms—namely, their fragmentation (training-free vs. training-oriented vs. label-free), the lack of a unifying framework, and poor adaptability to foundation models—the authors propose the first unified taxonomy encompassing all three paradigms, integrating submodular optimization, bilevel optimization, and pseudo-labeling techniques. They analyze, both theoretically and empirically, how data pruning affects generalization performance and neural scaling laws. A cross-method evaluation framework is established, characterizing trade-offs among accuracy, computational cost, and robustness. Finally, the paper systematically outlines a core-set adaptation pipeline tailored to fine-tuning foundation models. Collectively, these contributions provide both theoretical foundations and practical guidelines for efficient data compression and for making foundation models lighter to train and deploy.
📝 Abstract
Coreset selection targets the challenge of finding a small, representative subset of a large dataset that preserves the patterns essential for effective machine learning. Although several surveys have examined data reduction strategies, most focus narrowly on either classical geometry-based methods or active learning techniques. In contrast, this survey presents a more comprehensive view by unifying three major lines of coreset research, namely training-free, training-oriented, and label-free approaches, into a single taxonomy. We cover subfields often overlooked by existing work, including submodular formulations, bilevel optimization, and recent progress in pseudo-labeling for unlabeled datasets. Additionally, we examine how pruning strategies influence generalization and neural scaling laws, offering insights absent from prior reviews. Finally, we compare these methods under varying computational, robustness, and performance requirements, and highlight open challenges for future research, including robustness, outlier filtering, and adapting coreset selection to foundation models.
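To make the submodular formulations mentioned above concrete, here is a minimal greedy sketch that maximizes a facility-location objective, one of the classic submodular surrogates for coreset coverage. The function name, the clipped-cosine similarity, and all parameters are illustrative assumptions for this sketch, not a specific method from the survey.

```python
import numpy as np

def greedy_facility_location(X, k):
    """Greedily pick k coreset indices maximizing the facility-location
    objective F(S) = sum_i max_{j in S} sim(i, j).

    Illustrative sketch only: the clipped-cosine similarity is an
    assumption chosen so the objective is monotone submodular.
    """
    X = np.asarray(X, dtype=float)
    # Cosine similarity, clipped at 0 (assumes nonzero rows).
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = np.clip(Xn @ Xn.T, 0.0, None)
    n = len(X)
    best = np.zeros(n)   # best[i]: similarity of point i to its closest pick
    selected = []
    for _ in range(min(k, n)):
        # Marginal gain of each candidate j over the current coverage `best`.
        gains = np.maximum(sim, best[:, None]).sum(axis=0) - best.sum()
        gains[selected] = -np.inf  # never re-pick a selected point
        j = int(np.argmax(gains))
        selected.append(j)
        best = np.maximum(best, sim[:, j])
    return selected
```

Because facility location is monotone submodular, this greedy loop enjoys the standard (1 - 1/e) approximation guarantee; on two well-separated clusters, for example, it picks one representative from each.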