Class-Proportional Coreset Selection for Difficulty-Separable Data

📅 2025-07-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing coreset selection methods commonly assume uniform class-wise difficulty, overlooking the pronounced inter-class difficulty separation found in domains such as network intrusion detection and medical imaging. This work introduces the Class Difficulty Separability Coefficient (CDSC) to quantify that separation and shows how conventional class-agnostic approaches degrade under high-CDSC conditions. The authors then design a unified framework integrating dynamic difficulty-aware training assessment, Coverage-centric Coreset Selection (CCS), and class-proportional sampling to ensure balanced retention of hard instances. Evaluated on five cross-domain datasets, the method achieves 99% pruning on CTU-13 with only a 2.58% accuracy drop (0.49% in precision), demonstrating data efficiency significantly superior to state-of-the-art methods.

📝 Abstract
High-quality training data is essential for building reliable and efficient machine learning systems. One-shot coreset selection addresses this by pruning the dataset while maintaining or even improving model performance, often relying on training-dynamics-based data difficulty scores. However, most existing methods implicitly assume class-wise homogeneity in data difficulty, overlooking variation in data difficulty across different classes. In this work, we challenge this assumption by showing that, in domains such as network intrusion detection and medical imaging, data difficulty often clusters by class. We formalize this as class-difficulty separability and introduce the Class Difficulty Separability Coefficient (CDSC) as a quantitative measure. We demonstrate that high CDSC values correlate with performance degradation in class-agnostic coreset methods, which tend to overrepresent easy majority classes while neglecting rare but informative ones. To address this, we introduce class-proportional variants of multiple sampling strategies. Evaluated on five diverse datasets spanning security and medical domains, our methods consistently achieve state-of-the-art data efficiency. For instance, on CTU-13, at an extreme 99% pruning rate, a class-proportional variant of Coverage-centric Coreset Selection (CCS-CP) shows remarkable stability, with accuracy dropping only 2.58%, precision 0.49%, and recall 0.19%. In contrast, the class-agnostic CCS baseline, the next best method, suffers sharper declines of 7.59% in accuracy, 4.57% in precision, and 4.11% in recall. We further show that aggressive pruning enhances generalization in noisy, imbalanced, and large-scale datasets. Our results underscore that explicitly modeling class-difficulty separability leads to more effective, robust, and generalizable data pruning, particularly in high-stakes scenarios.
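The page does not reproduce the paper's exact CDSC formula. As a rough intuition for what such a separability coefficient measures, here is a minimal sketch (an assumption, not the authors' definition) that treats separability as the fraction of total difficulty-score variance explained by class membership, so values near 1 indicate difficulty clusters tightly by class:

```python
import numpy as np

def separability_coefficient(difficulty, labels):
    """Fraction of difficulty variance explained by class membership.

    Returns a value in [0, 1]: 0 when per-class difficulty distributions
    share the same mean, near 1 when difficulty clusters tightly by class.
    Illustrative only; the paper's CDSC may be defined differently.
    """
    difficulty = np.asarray(difficulty, dtype=float)
    labels = np.asarray(labels)
    overall_mean = difficulty.mean()
    total = ((difficulty - overall_mean) ** 2).sum()
    if total == 0.0:
        return 0.0
    between = 0.0
    for c in np.unique(labels):
        mask = labels == c
        # weight each class by its size, like a between-class variance term
        between += mask.sum() * (difficulty[mask].mean() - overall_mean) ** 2
    return between / total
```

On a toy example, perfectly class-clustered scores give 1.0 and identically distributed classes give 0.0, matching the intuition that class-agnostic pruning is risky only in the former regime.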
Problem

Research questions and friction points this paper is trying to address.

Addresses class-wise data difficulty variation in coreset selection
Improves performance in imbalanced, noisy datasets via class-proportional pruning
Enhances generalization in high-stakes domains like medical imaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces class-proportional coreset selection methods
Measures class-difficulty separability via CDSC metric
Enhances pruning robustness on imbalanced, noisy datasets
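The class-proportional idea can be sketched as follows: apply the pruning budget to each class independently so that rare classes keep their share of hard examples. This is a simplified illustration that keeps the hardest examples per class; the paper's CCS-CP variant additionally stratifies selection across the difficulty distribution for coverage, which this sketch omits:

```python
import numpy as np

def class_proportional_select(difficulty, labels, keep_frac):
    """Keep `keep_frac` of each class, preferring high-difficulty examples.

    Simplified sketch of class-proportional coreset selection: the budget
    is allocated per class, so minority classes are never pruned away
    wholesale. Returns sorted indices of the retained examples.
    """
    difficulty = np.asarray(difficulty, dtype=float)
    labels = np.asarray(labels)
    kept = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        # every class keeps at least one example, even at extreme pruning
        k = max(1, int(round(keep_frac * idx.size)))
        hardest_first = idx[np.argsort(difficulty[idx])[::-1]]
        kept.append(hardest_first[:k])
    return np.sort(np.concatenate(kept))
```

For example, with a 10:2 class imbalance and `keep_frac=0.5`, each class retains half of its own examples (5 and 1), whereas a class-agnostic global top-k by difficulty could discard the minority class entirely.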