🤖 AI Summary
This work addresses the problem of optimally allocating compute between general pretraining and domain-specific continued pretraining for language models in multi-domain settings. The authors pretrain multiple models independently on a general corpus and fit empirical scaling laws that predict loss as a function of model size and the numbers of pretraining and specialization tokens, enabling reliable extrapolation to larger scales. The fitted laws then determine the optimal split of compute between general pretraining and continued domain-adaptive pretraining. Experiments show consistent, significant gains on commonsense and reasoning benchmarks across diverse model scales and compute budgets, yielding an efficient, scale- and domain-aware recipe for allocating training compute in large language models.
📝 Abstract
Language models achieve impressive performance on a variety of knowledge, language, and reasoning tasks due to the scale and diversity of the available pretraining data. The standard training recipe is a two-stage paradigm: pretraining on the full corpus, followed by specialization on a high-quality subset of that corpus. In the multi-domain setting, this involves continued pretraining of multiple models, one per specialized domain, referred to as split model training. We propose a method for pretraining multiple models independently over a general pretraining corpus and for determining the optimal compute allocation between pretraining and continued pretraining using scaling laws. Our approach accurately predicts the loss of a model of size N trained with D pretraining tokens and D' specialization tokens, and extrapolates to larger model sizes and token counts. Applied to language model training, our approach consistently improves performance on commonsense knowledge and reasoning benchmarks across different model sizes and compute budgets.
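The scaling-law step the abstract describes can be sketched as a curve fit over small-scale training runs. The functional form below, a Chinchilla-style power law with an effective-data term D + r·D' that discounts specialization tokens relative to general ones, is an assumption for illustration (as are all constants and variable names); the paper's actual parametrization may differ. The sketch fits the law to synthetic small-scale runs, then queries it at a larger model size to compare candidate splits of a fixed token budget.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical parametric form (an illustrative assumption, not the
# paper's exact law): loss of a model with N parameters trained on D
# general pretraining tokens and D' specialization tokens, where r
# discounts specialization tokens relative to general ones.
def loss_law(X, E, A, alpha, B, beta, r):
    N, D, Dp = X
    return E + A / N**alpha + B / (D + r * Dp)**beta

rng = np.random.default_rng(0)
n_runs = 200

# Synthetic small-scale training runs standing in for real measurements.
N  = rng.uniform(1e7, 1e9, n_runs)    # model parameters
D  = rng.uniform(1e9, 1e11, n_runs)   # general pretraining tokens
Dp = rng.uniform(1e8, 1e10, n_runs)   # specialization tokens

true_params = (1.7, 400.0, 0.34, 1500.0, 0.28, 0.5)  # made-up constants
L = loss_law((N, D, Dp), *true_params) + rng.normal(0.0, 1e-3, n_runs)

# Fit the law to the small-scale runs.
p0 = (1.5, 300.0, 0.30, 1200.0, 0.25, 0.8)
popt, _ = curve_fit(loss_law, (N, D, Dp), L, p0=p0,
                    bounds=(0.0, np.inf), maxfev=20000)

# Extrapolate: predict loss for a model larger than any in the fit set,
# comparing two candidate splits of a fixed 2e11-token budget.
budget = 2e11
for frac in (0.1, 0.3):
    pred = loss_law((5e9, (1 - frac) * budget, frac * budget), *popt)
    print(f"specialization fraction {frac:.1f}: predicted loss {pred:.3f}")
```

In this toy setup the allocation question reduces to minimizing the fitted law over the split fraction at the target scale; the key property being exercised is that a law fit only on small N, D extrapolates to larger values.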