Large-Scale Data Selection for Instruction Tuning

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing data selection methods for large-scale instruction tuning often degrade in performance, or even underperform random sampling, when applied to million-scale data pools (up to 5.8M samples). To address this, the paper proposes RDS+, a lightweight and efficient data selection method that constructs sample representations via weighted mean pooling over pretrained language model hidden states, avoiding complex optimization or additional trained parameters. When selecting up to 2.5M samples, RDS+ consistently outperforms more complex selection methods and random baselines across seven diverse evaluation tasks, while incurring lower computational overhead. The experiments provide systematic evidence that many existing selection methods show diminishing or even negative returns as pool size grows, highlighting an overlooked scalability gap, and position RDS+ as an efficient, robust, and scalable approach to data selection for large-scale instruction tuning.
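The core representation step described above (weighted mean pooling over a pretrained LM's hidden states) can be sketched in a few lines. The snippet below is an illustrative assumption about the general technique, not the paper's exact implementation: it weights each token's final-layer hidden state linearly by position, on the intuition that later tokens have attended to more context, then normalizes over non-padding tokens.

```python
import numpy as np

def weighted_mean_pool(hidden_states, attention_mask):
    """Pool per-token hidden states into a single sample representation.

    hidden_states: (seq_len, dim) array of final-layer LM hidden states.
    attention_mask: (seq_len,) 0/1 array marking real (non-padding) tokens.

    Weights grow linearly with token position, so later tokens contribute
    more; padding positions are masked out before normalization.
    This is a sketch of weighted mean pooling in general, not the
    paper's exact weighting scheme.
    """
    positions = np.arange(1, hidden_states.shape[0] + 1, dtype=float)
    weights = positions * attention_mask          # zero out padding
    weights /= weights.sum()                      # normalize to sum to 1
    return weights @ hidden_states                # (dim,) pooled vector

# Example: 4 tokens, 1-dim states [1, 2, 3, 4], no padding.
# Normalized weights are [0.1, 0.2, 0.3, 0.4], so the pooled value is 3.0.
pooled = weighted_mean_pool(np.array([[1.0], [2.0], [3.0], [4.0]]),
                            np.ones(4))
```

Selection then reduces to nearest-neighbor or similarity scoring between these pooled vectors and representations of target-task examples, which is why the method needs no extra training.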

📝 Abstract
Selecting high-quality training data from a larger pool is a crucial step when instruction-tuning language models, as carefully curated datasets often produce models that outperform those trained on much larger, noisier datasets. Automated data selection approaches for instruction-tuning are typically tested by selecting small datasets (roughly 10k samples) from small pools (100-200k samples). However, popular deployed instruction-tuned models often train on hundreds of thousands to millions of samples, subsampled from even larger data pools. We present a systematic study of how well data selection methods scale to these settings, selecting up to 2.5M samples from pools of up to 5.8M samples and evaluating across 7 diverse tasks. We show that many recently proposed methods fall short of random selection in this setting (while using more compute), and even decline in performance when given access to larger pools of data to select over. However, we find that a variant of representation-based data selection (RDS+), which uses weighted mean pooling of pretrained LM hidden states, consistently outperforms more complex methods across all settings tested -- all whilst being more compute-efficient. Our findings highlight that the scaling properties of proposed automated selection methods should be more closely examined. We release our code, data, and models at https://github.com/hamishivi/automated-instruction-selection.
Problem

Research questions and friction points this paper is trying to address.

Scaling data selection methods for instruction-tuning large language models.
Evaluating performance of data selection methods on large datasets.
Identifying efficient and effective data selection techniques for model training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scaled data selection to 2.5M samples
Used RDS+ for efficient data selection
Evaluated across 7 diverse tasks