Data Efficacy for Language Model Training

📅 2025-06-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses *data efficacy*: improving language model (LM) performance through principled organization of training data (scoring, selection, ordering), complementing *data efficiency*, which reduces data volume. The authors formally define data efficacy and introduce DELT, a general paradigm comprising Data Scoring, Data Selection, and Data Ordering, with two core techniques: (1) Learnability-Quality Scoring (LQS), which scores each sample by both its learnability and its quality from a gradient-consistency perspective; and (2) Folding Ordering (FO), a data-ordering strategy that mitigates model forgetting and data distribution bias. Experiments show that DELT instances, especially LQS combined with FO, significantly improve LM performance without increasing the training data scale or model size. Crucially, data efficacy and data efficiency can be achieved together via data selection, yielding higher performance per token and establishing data-centric organization as a promising foundation for LM training.
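The summary describes LQS as scoring each sample by learnability and quality from a gradient-consistency perspective. The paper's exact formulation is not reproduced here; the following is a minimal, hypothetical sketch in which learnability is approximated by the cosine alignment of a per-sample gradient with the mean gradient, blended with a given quality score via a weight `alpha` (the function name, the rescaling, and `alpha` are all assumptions for illustration, not the authors' method).

```python
import numpy as np

def lqs_score(sample_grads, quality, alpha=0.5):
    """Hypothetical Learnability-Quality Scoring sketch.

    sample_grads: (n_samples, dim) array of per-sample gradients.
    quality: length-n sequence of quality scores in [0, 1].
    alpha: assumed blending weight between learnability and quality.
    Returns a length-n array of combined scores in [0, 1].
    """
    # Gradient-consistency proxy for learnability: cosine similarity
    # between each sample's gradient and the mean gradient direction.
    mean_grad = sample_grads.mean(axis=0)
    mean_grad = mean_grad / (np.linalg.norm(mean_grad) + 1e-8)
    norms = np.linalg.norm(sample_grads, axis=1, keepdims=True) + 1e-8
    learnability = (sample_grads / norms) @ mean_grad  # in [-1, 1]
    learnability = (learnability + 1.0) / 2.0          # rescale to [0, 1]
    # Blend learnability with the externally supplied quality score.
    return alpha * learnability + (1.0 - alpha) * np.asarray(quality)
```

A sample whose gradient agrees with the consensus direction and that also carries a high quality score ranks highest; samples that pull against the consensus are down-weighted even if their raw quality is decent.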

📝 Abstract
Data is fundamental to the training of language models (LMs). Recent research has been dedicated to data efficiency, which aims to maximize performance by selecting a minimal or optimal subset of training data. Techniques such as data filtering, sampling, and selection play a crucial role in this area. To complement it, we define Data Efficacy, which focuses on maximizing performance by optimizing the organization of training data and remains relatively underexplored. This work introduces a general paradigm, DELT, for considering data efficacy in LM training, which highlights the significance of training data organization. DELT comprises three components: Data Scoring, Data Selection, and Data Ordering. Among these components, we design Learnability-Quality Scoring (LQS), as a new instance of Data Scoring, which considers both the learnability and quality of each data sample from the gradient consistency perspective. We also devise Folding Ordering (FO), as a novel instance of Data Ordering, which addresses issues such as model forgetting and data distribution bias. Comprehensive experiments validate data efficacy in LM training and demonstrate the following: Firstly, various instances of the proposed DELT enhance LM performance to varying degrees without increasing the data scale and model size. Secondly, among these instances, the combination of our proposed LQS for data scoring and FO for data ordering achieves the most significant improvement. Lastly, data efficacy can be achieved together with data efficiency by applying data selection. Therefore, we believe that data efficacy is a promising foundational area in LM training.
Problem

Research questions and friction points this paper is trying to address.

Optimizing data organization to maximize language model performance
Introducing DELT paradigm for data scoring, selection, and ordering
Mitigating model forgetting and data distribution bias via folding ordering
Innovation

Methods, ideas, or system contributions that make the work stand out.

DELT paradigm optimizes data organization for LMs
LQS scores data by learnability and quality
Folding Ordering reduces forgetting and bias
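The abstract says Folding Ordering addresses model forgetting and data distribution bias, but does not spell out the procedure here. One plausible reading, sketched below under assumptions (the round-robin "dealing" of ranked samples into folds is an illustrative guess, not the paper's confirmed algorithm): sort samples by score, then interleave them into folds so that every fold spans the full score range, and train on the folds in sequence so the model repeatedly revisits the whole distribution.

```python
def folding_order(samples, scores, num_folds=4):
    """Hypothetical sketch of Folding Ordering.

    Sorts samples by score, then deals them round-robin into
    `num_folds` folds so each fold covers the full score range,
    and returns the folds concatenated as the training order.
    """
    # Rank samples from highest to lowest score.
    ranked = [s for _, s in sorted(zip(scores, samples), key=lambda p: -p[0])]
    # Strided split: fold i takes every num_folds-th ranked sample,
    # so no fold is stuck with only easy or only hard data.
    folds = [ranked[i::num_folds] for i in range(num_folds)]
    # The model sweeps the whole score distribution once per fold,
    # which is the intuition for reduced forgetting and bias.
    return [s for fold in folds for s in fold]
```

For example, eight samples scored 0-7 with two folds come out as `[7, 5, 3, 1, 6, 4, 2, 0]`: each half of training still touches both high- and low-scoring data.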