A Coreset Selection of Coreset Selection Literature: Introduction and Recent Advances

📅 2025-05-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the coreset selection problem: efficiently identifying a small, representative subset of large-scale data to support model training and compression. Existing paradigms suffer from fragmentation (training-agnostic vs. training-aware vs. label-free), a lack of unifying frameworks, and poor adaptability to foundation models. To overcome these limitations, the authors propose the first unified taxonomy spanning all three paradigms, integrating submodular optimization, bilevel optimization, and pseudo-labeling techniques. They analyze, both theoretically and empirically, how data pruning affects generalization performance and neural scaling laws. A cross-method evaluation framework is established that characterizes trade-offs among accuracy, computational cost, and robustness. Finally, the paper systematically outlines a coreset adaptation pipeline tailored to fine-tuning foundation models. Collectively, these contributions provide both theoretical foundations and practical guidelines for efficient data compression and for making foundation models more lightweight.
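To make the submodular angle concrete, here is a minimal sketch of training-free coreset selection via greedy maximization of a facility-location objective (each point should be close to some selected point). The function names, the Gaussian similarity, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import math

def greedy_coreset(points, k):
    """Greedily pick k indices maximizing a facility-location objective.

    Facility location F(S) = sum_i max_{j in S} sim(i, j) is monotone
    submodular, so the greedy algorithm gives a (1 - 1/e) approximation.
    Similarity here is an assumed Gaussian kernel exp(-||x - y||^2).
    """
    n = len(points)

    def sim(a, b):
        return math.exp(-sum((x - y) ** 2 for x, y in zip(points[a], points[b])))

    sims = [[sim(i, j) for j in range(n)] for i in range(n)]
    covered = [0.0] * n  # best similarity of each point to the current coreset
    selected = []
    for _ in range(min(k, n)):
        remaining = [j for j in range(n) if j not in selected]
        # marginal gain: total improvement in best-similarity coverage
        best = max(remaining, key=lambda j: sum(
            max(sims[i][j] - covered[i], 0.0) for i in range(n)))
        selected.append(best)
        covered = [max(covered[i], sims[i][best]) for i in range(n)]
    return selected
```

On two well-separated clusters, greedy facility location picks one representative per cluster before refining within a cluster, which is the intuition behind geometry-based training-free selection.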

📝 Abstract
Coreset selection targets the challenge of finding a small, representative subset of a large dataset that preserves essential patterns for effective machine learning. Although several surveys have examined data reduction strategies before, most focus narrowly on either classical geometry-based methods or active learning techniques. In contrast, this survey presents a more comprehensive view by unifying three major lines of coreset research, namely, training-free, training-oriented, and label-free approaches, into a single taxonomy. We present subfields often overlooked by existing work, including submodular formulations, bilevel optimization, and recent progress in pseudo-labeling for unlabeled datasets. Additionally, we examine how pruning strategies influence generalization and neural scaling laws, offering new insights that are absent from prior reviews. Finally, we compare these methods under varying computational, robustness, and performance demands and highlight open challenges, such as robustness, outlier filtering, and adapting coreset selection to foundation models, for future research.
Problem

Research questions and friction points this paper is trying to address.

Finding small representative subsets of large datasets for machine learning
Unifying diverse coreset research approaches into a single taxonomy
Addressing robustness and adaptation challenges in coreset selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies training-free, training-oriented, and label-free coreset approaches
Explores submodular formulations, bilevel optimization, and pseudo-labeling techniques
Analyzes how pruning affects generalization and neural scaling laws
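The pruning-and-scaling-laws point above can be illustrated with a small sketch of score-based data pruning, the training-aware family the survey covers. The scores stand in for any per-example difficulty signal (loss, forgetting events, gradient norm); the keep-hard vs. keep-easy switch is an assumed convention, not the survey's prescription.

```python
def prune_by_score(scores, keep_fraction, keep="hard"):
    """Return sorted indices of the retained subset.

    keep="hard" retains the highest-scoring (hardest) examples, which tends
    to help at mild pruning rates; keep="easy" retains the lowest-scoring
    ones, often safer under aggressive pruning (a trade-off studied in
    neural scaling-law analyses of data pruning).
    """
    n_keep = max(1, int(round(keep_fraction * len(scores))))
    order = sorted(range(len(scores)),
                   key=lambda i: scores[i],
                   reverse=(keep == "hard"))
    return sorted(order[:n_keep])
```

For example, with per-example losses `[0.1, 0.9, 0.5, 0.3]` and `keep_fraction=0.5`, keeping the hard half retains indices `[1, 2]`, while keeping the easy half retains `[0, 3]`.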