Data Selection via Optimal Control for Language Models

📅 2024-10-09
🏛️ arXiv.org
📈 Citations: 3
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of inefficient and unreliable high-quality data selection in large language model (LLM) pretraining. It formulates data selection as a generalized optimal control problem and derives necessary optimality conditions via Pontryagin's Maximum Principle (PMP), leading to the PMP-based Data Selection (PDS) framework. PDS jointly models the data selection policy and model training dynamics, and employs decoupled optimization to ensure scalability. Evaluated on CommonCrawl, PDS accelerates convergence and improves average multi-task performance by 5.2%; extrapolating test-loss curves via the Scaling Laws suggests the benefits extend to ~400B-parameter models trained on ~10T tokens. Moreover, it achieves equivalent performance using 1.8× less data, thereby mitigating the scarcity of high-quality pretraining corpora.
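Schematically, the optimal-control view described above can be written as follows (notation simplified and illustrative, not necessarily the paper's exact formulation): the per-example data weights $\gamma$ act as the control and the model parameters $\theta_t$ as the state,

```latex
\min_{\gamma \ge 0,\; \sum_n \gamma_n = 1} \; J(\gamma) = \sum_{t=1}^{T} L(\theta_t)
\qquad \text{s.t.} \qquad
\theta_{t+1} = \theta_t - \eta \sum_{n} \gamma_n \,\nabla \ell(x_n, \theta_t),
```

where $\ell(x_n, \theta)$ is the training loss on example $x_n$ and $L$ is the target (downstream) loss. PMP then yields necessary conditions involving a costate $\lambda_t$ propagated backward from $\nabla L$, which favor examples whose gradients align with the costate, i.e. large $\lambda_{t+1}^{\top} \nabla \ell(x_n, \theta_t)$.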

πŸ“ Abstract
This work investigates the selection of high-quality pre-training data from massive corpora to enhance LMs' capabilities for downstream usage. We formulate data selection as a generalized Optimal Control problem, which can be solved theoretically by Pontryagin's Maximum Principle (PMP), yielding a set of necessary conditions that characterize the relationship between optimal data selection and LM training dynamics. Based on these theoretical results, we introduce PMP-based Data Selection (PDS), a framework that approximates optimal data selection by solving the PMP conditions. In our experiments, we adopt PDS to select data from CommonCrawl and show that the PDS-selected corpus accelerates the learning of LMs and consistently boosts their performance on a wide range of downstream tasks across various model sizes. Moreover, the benefits of PDS extend to ~400B models trained on ~10T tokens, as evidenced by the extrapolation of the test loss curves according to the Scaling Laws. PDS also improves data utilization when the pre-training data is limited, by reducing the data demand by 1.8 times, which helps mitigate the quick exhaustion of available web-crawled corpora. Our code, model, and data can be found at https://github.com/microsoft/LMOps/tree/main/data_selection.
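The selection rule implied by the PMP conditions can be illustrated with a toy sketch: score each candidate training example by how well its loss gradient aligns with a "costate" direction, approximated here by the gradient of a small validation (downstream) loss. This is a minimal, hypothetical proxy using a linear least-squares model, not the paper's actual PDS solver; the function names and the validation-gradient approximation are assumptions for illustration only.

```python
import numpy as np

def example_grad(w, x, y):
    """Gradient of the squared loss 0.5 * (x @ w - y)**2 for one example."""
    return (x @ w - y) * x

def pds_style_scores(w, train_X, train_y, val_X, val_y):
    """Score training examples by gradient alignment with a validation loss.

    A simplified PMP-flavored proxy: the costate is approximated by the
    mean validation gradient, and each example's score is the inner
    product of its gradient with that direction.
    """
    costate = np.mean(
        [example_grad(w, x, y) for x, y in zip(val_X, val_y)], axis=0
    )
    # Higher score: descending on this example also lowers validation loss.
    return np.array(
        [example_grad(w, x, y) @ costate for x, y in zip(train_X, train_y)]
    )

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
w = rng.normal(size=3)                       # current model parameters
train_X = rng.normal(size=(8, 3))
train_y = train_X @ w_true
train_y[:2] += 5.0                           # two corrupted examples
val_X = rng.normal(size=(4, 3))
val_y = val_X @ w_true                       # clean validation set

scores = pds_style_scores(w, train_X, train_y, val_X, val_y)
selected = np.argsort(scores)[::-1][:4]      # keep the top-scoring half
print(selected)
```

In the full method the costate is propagated backward through the training dynamics rather than read off a validation batch, but the qualitative behavior is the same: examples whose gradients point toward lower downstream loss receive higher selection weight.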
Problem

Research questions and friction points this paper is trying to address.

Selects high-quality pre-training data for language models
Formulates data selection as an Optimal Control problem
Improves model performance and data utilization efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formulates data selection as Optimal Control problem
Introduces PMP-based Data Selection (PDS) framework
Reduces pre-training data demand by 1.8×