Combatting Dimensional Collapse in LLM Pre-Training Data via Diversified File Selection

📅 2025-04-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address feature-space dimensional collapse and the degraded generalization it causes when large language model (LLM) pre-training data is filtered by domain similarity, this paper proposes a document-level data-diversity optimization method. It introduces DiSF, the first algorithm that explicitly maximizes the spectral uniformity of the feature covariance matrix, grounding a greedy selection strategy in γ-weak submodularity theory. A lightweight surrogate model extracts text representations, and the method is evaluated on the SlimPajama dataset with the TinyLlama architecture. Remarkably, training on only 1.5% of the data surpasses full-dataset performance. On the nine-task Harness benchmark, DiSF achieves about a 1.5× training speedup and a 5× improvement in data efficiency. The work is the first to bring spectral-uniformity-driven data selection and weak submodularity theory into LLM pre-training data curation, establishing a principled, theoretically grounded framework for diversity-aware data filtering.
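The "spectral uniformity" objective can be made concrete with a small sketch. One common proxy (an illustrative assumption here, not necessarily the paper's exact objective) is the entropy of the normalized eigenvalues of the feature covariance matrix: it is maximal when all eigenvalues are equal and drops toward zero under dimensional collapse, where the features concentrate in a few directions.

```python
import numpy as np

def spectral_uniformity(features: np.ndarray) -> float:
    """Entropy of the normalized eigenvalues of the feature covariance.

    `features` is an (n_files, dim) matrix of text embeddings from a
    surrogate model. The score is maximal (log dim) when all eigenvalues
    are equal, i.e. no dimensional collapse. Illustrative proxy only.
    """
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / max(len(features) - 1, 1)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    p = eigvals / eigvals.sum()
    return float(-(p * np.log(p)).sum())
```

A diverse embedding cloud scores near log(dim), while embeddings that all lie along one direction score near zero, which is exactly the collapse the summary describes.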

📝 Abstract
Selecting high-quality pre-training data for large language models (LLMs) is crucial for enhancing their overall performance under a limited computation budget, improving both training and sample efficiency. Recent advancements in file selection primarily rely on using an existing or trained proxy model to assess the similarity of samples to a target domain, such as the high-quality sources BookCorpus and Wikipedia. However, upon revisiting these methods, the domain-similarity selection criterion exhibits a diversity dilemma, i.e., dimensional collapse in the feature space, improving performance on domain-related tasks but causing severe degradation in generic performance. To prevent collapse and enhance diversity, we propose a DiverSified File selection algorithm (DiSF), which selects the most decorrelated text files in the feature space. We approach this with a classical greedy algorithm to achieve more uniform eigenvalues in the feature covariance matrix of the selected texts, analyzing its approximation to the optimal solution under a formulation of the $\gamma$-weakly submodular optimization problem. Empirically, we establish a benchmark and conduct extensive experiments on the TinyLlama architecture with models from 120M to 1.1B parameters. Evaluating across nine tasks from the Harness framework, DiSF demonstrates a significant improvement in overall performance. Specifically, DiSF saves 98.5% of the 590M training files in SlimPajama, outperforming full-data pre-training within a 50B training budget, and achieving about 1.5x training efficiency and 5x data efficiency.
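The greedy selection the abstract describes can be sketched as a loop that repeatedly adds the file whose embedding most improves the uniformity of the selected set's covariance eigenvalues. This is a minimal illustration under assumed names (`eigen_entropy`, `greedy_diverse_select` are hypothetical); the actual DiSF algorithm batches this for web-scale corpora and comes with a $\gamma$-weak submodularity approximation guarantee.

```python
import numpy as np

def eigen_entropy(cov: np.ndarray) -> float:
    """Entropy of normalized covariance eigenvalues; higher = more uniform."""
    w = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    p = w / w.sum()
    return float(-(p * np.log(p)).sum())

def greedy_diverse_select(features: np.ndarray, budget: int) -> list[int]:
    """Greedily pick `budget` rows whose joint covariance is most uniform.

    `features` is an (n_files, dim) matrix of surrogate-model embeddings.
    Naive O(n * budget) loop for illustration only.
    """
    selected: list[int] = []
    remaining = set(range(len(features)))
    for _ in range(budget):
        best_i, best_gain = None, -np.inf
        for i in remaining:
            cand = features[selected + [i]]
            # Uncentered second moment keeps the sketch simple.
            cov = cand.T @ cand / len(cand)
            gain = eigen_entropy(cov)
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
        remaining.remove(best_i)
    return selected
```

Given several copies of one embedding direction plus one orthogonal file, the loop prefers the orthogonal file over a duplicate, since duplicates leave the covariance rank-one while the orthogonal file spreads the eigenvalues.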
Problem

Research questions and friction points this paper is trying to address.

Preventing dimensional collapse in LLM pre-training data selection
Enhancing diversity in file selection for better model performance
Improving training and data efficiency under limited computation budget
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diversified file selection to prevent dimensional collapse
Greedy algorithm for uniform feature covariance eigenvalues
Improves training and data efficiency significantly