PEAKS: Selecting Key Training Examples Incrementally via Prediction Error Anchored by Kernel Similarity

📅 2025-04-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
In data stream settings where full historical data is inaccessible and a fixed-size, high-quality training set must be dynamically maintained, Incremental Data Selection (IDS) faces the challenge of identifying valuable samples under strict memory constraints. Method: This paper introduces the first framework revealing the joint influence of prediction error and kernel similarity on sample incremental utility, proposing an "error-anchored similarity" co-evaluation paradigm. It constructs a geometric feature-space model based on kernel similarity, integrated with online error estimation and a lightweight incremental priority-ranking algorithm to jointly optimize sample selection and model learning. Results: Extensive experiments on multiple real-world datasets demonstrate that the method significantly outperforms state-of-the-art IDS approaches. Moreover, its performance gain over random selection consistently increases with training set size, indicating strong scalability and robustness under evolving data distributions.
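The co-evaluation idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual algorithm: the RBF kernel choice, the `error * (1 - max similarity)` utility rule, and the evict-the-lowest-score buffer policy are all assumptions made here for clarity.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # RBF similarity between two feature vectors (assumed kernel choice)
    return np.exp(-gamma * np.sum((a - b) ** 2))

def utility_score(error, features, buffer_features, gamma=1.0):
    # Hypothetical "error anchored by similarity" rule: a sample's
    # prediction error is discounted by its maximum kernel similarity
    # to samples already kept, so redundant samples score low.
    if not buffer_features:
        return error
    sims = [rbf_kernel(features, f, gamma) for f in buffer_features]
    return error * (1.0 - max(sims))

def select_stream(stream, capacity, gamma=1.0):
    # Maintain a fixed-size buffer of (score, features) pairs over a
    # stream of (prediction_error, feature_vector) samples, evicting
    # the lowest-scoring entry when a better candidate arrives.
    buffer = []
    for error, features in stream:
        kept = [f for _, f in buffer]
        score = utility_score(error, features, kept, gamma)
        if len(buffer) < capacity:
            buffer.append((score, features))
        else:
            worst = min(range(len(buffer)), key=lambda i: buffer[i][0])
            if score > buffer[worst][0]:
                buffer[worst] = (score, features)
    return buffer
```

With this sketch, a duplicate of an already-kept sample scores near zero (similarity near 1), while a high-error sample far from the buffer in feature space scores highly and displaces the weakest entry.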

📝 Abstract
As deep learning continues to be driven by ever-larger datasets, understanding which examples are most important for generalization has become a critical question. While progress in data selection continues, emerging applications require studying this problem in dynamic contexts. To bridge this gap, we pose the Incremental Data Selection (IDS) problem, where examples arrive as a continuous stream, and need to be selected without access to the full data source. In this setting, the learner must incrementally build a training dataset of predefined size while simultaneously learning the underlying task. We find that in IDS, the impact of a new sample on the model state depends fundamentally on both its geometric relationship in the feature space and its prediction error. Leveraging this insight, we propose PEAKS (Prediction Error Anchored by Kernel Similarity), an efficient data selection method tailored for IDS. Our comprehensive evaluations demonstrate that PEAKS consistently outperforms existing selection strategies. Furthermore, PEAKS yields increasingly better performance returns than random selection as training data size grows on real-world datasets.
Problem

Research questions and friction points this paper is trying to address.

Identify key training examples for generalization in dynamic data streams
Select incremental training data without full dataset access
Balance geometric relationships and prediction errors for optimal selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incremental data selection from a continuous stream under a fixed memory budget
Kernel similarity anchors prediction error in a joint utility score
Efficient selection with growing gains over random sampling as data scales
Mustafa Burak Gurbuz
School of Computer Science, Georgia Institute of Technology, USA
Xingyu Zheng
Cold Spring Harbor Laboratory, USA
Constantine Dovrolis
Professor of Computer Science, Georgia Tech
Neuro-inspired Machine Learning · Network Science · Computational Science