Rethinking Representativeness and Diversity in Dynamic Data Selection

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of effectively balancing sample representativeness and diversity in dynamic data selection to accelerate training while preserving model accuracy. The authors propose a novel framework that defines representativeness as coverage of high-frequency feature factors in the dataset, while diversity is achieved by progressively introducing complementary rare factors during training. Leveraging sparse autoencoders, the method identifies sparsely activated units in the feature space and estimates both sample-level and dataset-level factor distributions. A frequency-based penalty mechanism combined with a smooth scheduling strategy enables efficient, gradient-free data selection. Evaluated across five vision and language benchmarks, the approach matches or exceeds the performance of full-data training while achieving over 2× speedup in training time.
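The representativeness scoring described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: it assumes the sparse autoencoder's activations are available as a nonnegative sample-by-unit matrix, and that "high-frequency factors" are simply the units that fire most often across the dataset (the `top_frac` cutoff is a hypothetical parameter chosen here for illustration).

```python
import numpy as np

def representativeness_scores(activations, top_frac=0.2):
    """Score each sample by how well its active sparse units cover
    the dataset's most frequent factors.

    activations: (n_samples, n_units) nonnegative SAE activations.
    Returns a score in [0, 1] per sample.
    """
    active = activations > 0                 # which sparse units fire per sample
    factor_freq = active.mean(axis=0)        # dataset-level factor frequency
    k = max(1, int(top_frac * active.shape[1]))
    frequent = np.argsort(factor_freq)[-k:]  # indices of high-frequency factors
    # representativeness = fraction of the frequent factors a sample activates
    return active[:, frequent].mean(axis=1)
```

Since the scores are computed from stored activations alone, this matches the paper's claim of selection without extra gradients on the training model.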

📝 Abstract
Dynamic data selection accelerates training by sampling a changing subset of the dataset while preserving accuracy. We rethink two core notions underlying sample evaluation: representativeness and diversity. Instead of local geometric centrality, we define representativeness as coverage of dataset-level common or high-frequency feature factors. Instead of within-subset dispersion, we define diversity at the process level, requiring the selection trajectory to gradually include complementary rare factors over training. Based on this view, we propose a dynamic selection framework with three components. First, we score representativeness in a plug-in feature space to prioritize samples covering frequent factors. We instantiate this with a sparse autoencoder trained on the target dataset, using sparse unit activations to summarize both individual samples and dataset-wide factor statistics. Second, we realize process-level diversity by combining rare-factor sampling with a Usage-Frequency Penalty that promotes sample rotation, provably discourages monopoly, and reduces gradient bias. Third, we couple the two-dimensional scoring with a smooth scheduler that transitions selection from core-pattern consolidation to rare-factor exploration, without extra gradients, influence estimates, or second-order computations on the training model. Extensive experiments on five benchmarks across vision and text tasks demonstrate improved accuracy-efficiency trade-offs across models. Our method matches or exceeds full-data accuracy with over 2x training acceleration. Code will be released.
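The abstract's three components can be combined in a minimal selection-round sketch. Everything here is an assumption for illustration: the cosine schedule stands in for the paper's unspecified "smooth scheduler", the linear usage subtraction stands in for the Usage-Frequency Penalty, and `rep_scores`/`rarity_scores` are hypothetical precomputed per-sample scores (e.g. from SAE factor statistics).

```python
import numpy as np

def select_subset(rep_scores, rarity_scores, usage_counts,
                  step, total_steps, budget, penalty=0.5):
    """Pick `budget` samples for one training round, shifting from
    core-pattern consolidation to rare-factor exploration over time."""
    t = step / total_steps                 # training progress in [0, 1]
    alpha = 0.5 * (1 + np.cos(np.pi * t))  # smooth cosine schedule: 1 -> 0
    score = alpha * rep_scores + (1 - alpha) * rarity_scores
    score = score - penalty * usage_counts  # usage penalty promotes rotation
    chosen = np.argsort(score)[-budget:]
    usage_counts[chosen] += 1               # update usage for the next round
    return chosen
```

Early in training (`alpha` near 1) the selection favors representative samples; late in training it favors rare ones, while the usage penalty discourages any sample from monopolizing the subset.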
Problem

Research questions and friction points this paper is trying to address.

dynamic data selection
representativeness
diversity
training acceleration
sample evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic data selection
representativeness
diversity
sparse autoencoder
usage-frequency penalty
Yuzhe Zhou
Southeast University, Nanjing, China
Zhenglin Hua
Southeast University, Nanjing, China
Haiyun Guo
Rice University ECE Ph.D.
Yuheng Jia
Southeast University, Nanjing, China