Learning from the Best, Differently: A Diversity-Driven Rethinking on Data Selection

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing score-based data selection methods suffer from insufficient sample diversity due to strong correlations among multidimensional metrics, and exhibit a non-monotonic relationship between selection scores and downstream performance—high scores do not guarantee high efficacy. This stems from dimensional collapse across correlated evaluation axes. Method: We propose Orthogonal Diversity-aware Selection (ODiS), a principled framework comprising three stages: (1) constructing a tri-dimensional assessment system evaluating language quality, knowledge coverage, and comprehension difficulty; (2) applying PCA to orthogonally decouple these dimensions, reducing metric overlap to <2%; and (3) independently sampling high-quality data along each orthogonal axis. Contribution/Results: ODiS consistently outperforms state-of-the-art baselines across multiple downstream tasks. It is the first method to jointly and systematically ensure both data quality and diversity at the algorithmic level, establishing an interpretable, reproducible paradigm for large language model pretraining data selection.

📝 Abstract
High-quality pre-training data is crucial for large language models, where quality captures factual reliability and semantic value, and diversity ensures broad coverage and distributional heterogeneity. Existing approaches typically rely on single- or multi-dimensional score-based selection. However, directly selecting top-scored data often degrades performance, and sampling from a broader range is required to recover results. This non-monotonicity between dataset scores and downstream benchmark results reveals a fundamental bias: score-based methods collapse correlated dimensions, causing top-scored data to appear high-quality while systematically overlooking diversity. We argue that ensuring diversity requires decomposing correlated metrics into orthogonal feature dimensions, from which top-scored data can then be directly selected. We therefore propose the Orthogonal Diversity-Aware Selection (ODiS) algorithm, which preserves both quality and diversity during data selection. First, ODiS evaluates data along multiple dimensions, covering language quality, knowledge quality, and comprehension difficulty. The multi-dimensional scores are then decorrelated via Principal Component Analysis (PCA), yielding orthogonal evaluation dimensions. For each dimension, a RoBERTa-based scorer is trained to regress the data onto the PCA-projected scores, enabling scalable inference on large corpora. Finally, ODiS constructs the training dataset by selecting top-scored data within each orthogonal dimension, thereby ensuring both quality and diversity. Empirical results show that ODiS-selected data exhibit less than 2% inter-dimension overlap, confirming the orthogonality of the dimensions. More importantly, models trained on ODiS-selected data significantly outperform baselines on downstream benchmarks, highlighting the necessity of orthogonal, diversity-aware data selection for LLMs.
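The core selection pipeline the abstract describes (score along correlated metrics, decorrelate with PCA, then take top-scored items independently per orthogonal axis) can be sketched as follows. This is a minimal illustration on synthetic scores, not the authors' implementation; the toy score matrix, the value of `k`, and the union of per-axis picks are assumptions made for the example.

```python
# Sketch of the ODiS selection idea (hypothetical toy example):
# decorrelate correlated multi-dimensional quality scores via PCA,
# then select top-scored documents independently along each
# orthogonal component.
import numpy as np

rng = np.random.default_rng(0)

# Toy score matrix: 1000 documents x 3 deliberately correlated metrics
# (stand-ins for language quality, knowledge quality, difficulty).
base = rng.normal(size=(1000, 1))
scores = np.hstack([base + 0.1 * rng.normal(size=(1000, 1)) for _ in range(3)])

# PCA via SVD on the centered scores -> orthogonal evaluation dimensions.
centered = scores - scores.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ Vt.T  # documents expressed in the orthogonal basis

# Select the top-k documents independently along each orthogonal axis,
# then union the picks so the final set covers every dimension.
k = 50
selected = set()
for dim in range(projected.shape[1]):
    top = np.argsort(projected[:, dim])[-k:]
    selected.update(top.tolist())

print(len(selected))  # between k and 3*k unique documents
```

Because PCA components are uncorrelated by construction, per-axis top-k picks overlap far less than top-k picks on the raw correlated metrics would, which is the mechanism behind the paper's reported sub-2% inter-dimension overlap.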
Problem

Research questions and friction points this paper is trying to address.

Addresses biased data selection in language model pretraining
Proposes orthogonal decomposition to preserve data diversity
Ensures quality and diversity through multi-dimensional scoring
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal Diversity-Aware Selection algorithm preserves quality and diversity
Multi-dimensional scores decorrelated via Principal Component Analysis
Top-scored data selected within each orthogonal dimension
Hongyi He
Tsinghua University
Xiao Liu
Microsoft Research
Zhenghao Lin
MSRA
Mingni Tang
The Hong Kong Polytechnic University
Yi Cheng
The Hong Kong Polytechnic University
Jintao Wang
University of Macau
Wenjie Li
The Hong Kong Polytechnic University
Peng Cheng
Microsoft Research
Yeyun Gong
Microsoft Research Asia