🤖 AI Summary
Existing score-based data selection methods suffer from insufficient sample diversity due to strong correlations among multidimensional metrics, and exhibit a non-monotonic relationship between selection scores and downstream performance—high scores do not guarantee high efficacy. This stems from dimensional collapse across correlated evaluation axes.
Method: We propose Orthogonal Diversity-Aware Selection (ODiS), a principled three-stage framework: (1) constructing a three-dimensional assessment system that evaluates language quality, knowledge quality, and comprehension difficulty; (2) applying PCA to decorrelate these dimensions into orthogonal axes, so that the selected subsets overlap by less than 2% across dimensions; and (3) independently selecting top-scored data along each orthogonal axis.
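The decouple-then-select idea behind stages (2) and (3) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the `odis_select` helper and the random score matrix are hypothetical stand-ins for the real multi-dimensional quality scores.

```python
import numpy as np

def odis_select(scores, k_per_axis):
    """Sketch of ODiS-style selection (hypothetical helper):
    decorrelate multi-dimensional quality scores via PCA, then take
    the top-k documents along each orthogonal component independently."""
    # Center the raw scores (n_docs x n_metrics).
    centered = scores - scores.mean(axis=0)
    # PCA via SVD: rows of vt are orthogonal principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    projected = centered @ vt.T  # decorrelated scores per document
    selected = set()
    for axis in range(projected.shape[1]):
        # Independent top-k selection along each orthogonal axis.
        top = np.argsort(projected[:, axis])[::-1][:k_per_axis]
        selected.update(top.tolist())
    return sorted(selected)

rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 3))  # 100 docs, 3 synthetic metric scores
chosen = odis_select(scores, k_per_axis=10)
```

Because the union of the per-axis top-k sets is taken, the selected pool is at most `k_per_axis` times the number of axes, and near-orthogonal axes keep the per-axis picks largely disjoint.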
Contribution/Results: ODiS consistently outperforms state-of-the-art baselines across multiple downstream tasks. It is the first method to jointly and systematically ensure both data quality and diversity at the algorithmic level, establishing an interpretable, reproducible paradigm for large language model pretraining data selection.
📝 Abstract
High-quality pre-training data is crucial for large language models, where quality captures factual reliability and semantic value, and diversity ensures broad coverage and distributional heterogeneity. Existing approaches typically rely on single- or multi-dimensional score-based selection. However, directly selecting the top-scored data often degrades downstream performance, and sampling over a broader score range is required to recover it. This non-monotonicity between dataset scores and downstream benchmark results reveals a fundamental bias: score-based methods collapse correlated dimensions, so that top-scored data appear high-quality while diversity is systematically overlooked. We argue that ensuring diversity requires decomposing correlated metrics into orthogonal feature dimensions, from which top-scored data can then be selected directly. We therefore propose the Orthogonal Diversity-Aware Selection (ODiS) algorithm, which preserves both quality and diversity during data selection. First, ODiS evaluates data along multiple dimensions covering language quality, knowledge quality, and comprehension difficulty. The multi-dimensional scores are then decorrelated via Principal Component Analysis (PCA), yielding orthogonal evaluation dimensions. For each dimension, a RoBERTa-based scorer is trained to regress data onto the PCA-projected scores, enabling scalable inference on large corpora. Finally, ODiS constructs the training dataset by selecting the top-scored data within each orthogonal dimension, thereby ensuring both quality and diversity. Empirical results show that ODiS-selected data exhibit less than 2% inter-dimension overlap, confirming the orthogonality of the dimensions. More importantly, models trained on ODiS-selected data significantly outperform baselines on downstream benchmarks, highlighting the necessity of orthogonal, diversity-aware data selection for LLMs.
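To make the inter-dimension overlap claim concrete, the following hedged simulation (synthetic correlated scores, not the paper's data or metrics) contrasts the two regimes: taking top-k along raw correlated metric columns yields heavily overlapping selections, while taking top-k along PCA-decorrelated axes yields near-disjoint ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n_docs, k = 20_000, 200

# Synthetic stand-in for three correlated quality metrics
# (e.g. language quality, knowledge quality, comprehension difficulty).
latent = rng.normal(size=(n_docs, 3))
mixing = np.array([[1.0, 0.8, 0.6],
                   [0.8, 1.0, 0.7],
                   [0.6, 0.7, 1.0]])
scores = latent @ mixing  # columns are strongly correlated

def topk_sets(matrix, k):
    """Top-k document indices along each column."""
    return [set(np.argsort(matrix[:, a])[::-1][:k])
            for a in range(matrix.shape[1])]

def pairwise_overlap(sets, k):
    """Fraction of shared documents for every pair of top-k sets."""
    return [len(sets[i] & sets[j]) / k
            for i in range(len(sets)) for j in range(i + 1, len(sets))]

# Overlap when selecting directly on the raw, correlated scores.
raw_overlap = pairwise_overlap(topk_sets(scores, k), k)

# Decorrelate via PCA (SVD of the centered score matrix), then re-measure.
centered = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt.T
pca_overlap = pairwise_overlap(topk_sets(proj, k), k)
```

Under these assumptions the raw top-k sets share a large fraction of documents, while the PCA-decorrelated sets overlap only by chance, mirroring the low inter-dimension overlap the abstract reports for ODiS-selected data.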