The Finetuner's Fallacy: When to Pretrain with Your Finetuning Data

📅 2026-03-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses overfitting and catastrophic forgetting of general knowledge when fine-tuning large language models in data-scarce vertical domains. To mitigate this, the authors propose Specialized Pretraining (SPT), which mixes small-scale domain-specific data into the pretraining phase at a controlled repetition ratio, rather than relying solely on fine-tuning. Theoretically, the paper identifies the "finetuner's fallacy" and derives overfitting scaling laws that guide the choice of domain-data repetition rate for a given compute budget. Empirically, SPT reaches a given level of domain performance with up to 1.75× fewer pretraining tokens. Notably, on domains distant from web text, such as chemistry, music, and mathematics, a 1B-parameter SPT model outperforms a standard 3B-parameter model, substantially reducing computational costs while preserving or enhancing task-specific efficacy.

📝 Abstract
Real-world model deployments demand strong performance on narrow domains where data is often scarce. Typically, practitioners finetune models to specialize them, but this risks overfitting to the domain and forgetting general knowledge. We study a simple strategy, specialized pretraining (SPT), where a small domain dataset, typically reserved for finetuning, is repeated starting from pretraining as a fraction of the total tokens. Across three specialized domains (ChemPile, MusicPile, and ProofPile), SPT improves domain performance and preserves general capabilities after finetuning compared to standard pretraining. In our experiments, SPT reduces the pretraining tokens needed to reach a given domain performance by up to 1.75x. These gains grow when the target domain is underrepresented in the pretraining corpus: on domains far from web text, a 1B SPT model outperforms a 3B standard pretrained model. Beyond these empirical gains, we derive overfitting scaling laws to guide practitioners in selecting the optimal domain-data repetition for a given pretraining compute budget. Our observations reveal the finetuner's fallacy: while finetuning may appear to be the cheapest path to domain adaptation, introducing specialized domain data during pretraining stretches its utility. SPT yields better specialized domain performance (via reduced overfitting across repeated exposures) and better general domain performance (via reduced forgetting during finetuning), ultimately achieving stronger results with fewer parameters and less total compute when amortized over inference. To get the most out of domain data, incorporate it as early in training as possible.
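The core recipe in the abstract, repeating a small domain dataset so it makes up a fixed fraction of the pretraining token budget, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `mix_spt_stream`, its arguments, and the greedy ratio control are all assumptions.

```python
import random

def mix_spt_stream(general_docs, domain_docs, domain_fraction, total_tokens, seed=0):
    """Build a pretraining token stream in which the small domain dataset is
    repeated until it accounts for roughly `domain_fraction` of all tokens.

    Each doc is a list of token ids. Hypothetical SPT-style mixing sketch."""
    rng = random.Random(seed)
    stream, n_tokens, n_domain = [], 0, 0
    while n_tokens < total_tokens:
        # Draw from the (repeated) domain set whenever the running domain
        # share falls below the target fraction; otherwise draw general data.
        if n_domain < domain_fraction * max(n_tokens, 1):
            doc = rng.choice(domain_docs)
            n_domain += len(doc)
        else:
            doc = rng.choice(general_docs)
        stream.extend(doc)
        n_tokens += len(doc)
    return stream, n_domain / n_tokens
```

With a 10% target fraction, the realized domain share of the stream stays close to 0.1 even though the domain set is tiny, which is the regime the paper's overfitting scaling laws are meant to characterize.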
Problem

Research questions and friction points this paper is trying to address.

domain adaptation
overfitting
catastrophic forgetting
pretraining
finetuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Specialized Pretraining
Overfitting Scaling Laws
Domain Adaptation
Pretraining-Finetuning Gap
Data Efficiency