🤖 AI Summary
This work addresses catastrophic forgetting and the limited performance gains seen when fine-tuning large language models on data-scarce target domains such as mathematics. The authors propose replaying generic pretraining data during fine-tuning, which not only mitigates forgetting but also improves performance on the target task itself, particularly when target data are scarce. In controlled experiments with 150M-parameter models trained on 4B total tokens and only 4M target tokens, they systematically vary the replay ratio and the schedule by which target data is introduced, finding that replay improves target data efficiency by up to 1.87× for fine-tuning and 2.06× for mid-training. Scaling up, fine-tuning 8B-parameter models with replay improves agentic web navigation success by 4.5% and Basque question-answering accuracy by 2%.
📝 Abstract
To obtain a language model for a target domain (e.g. math), the current paradigm is to pre-train on a vast amount of generic web text and then fine-tune on the relatively limited amount of target data. Typically, generic data is only mixed in during fine-tuning to prevent catastrophic forgetting of the generic domain. We surprisingly find that replaying the generic data during fine-tuning can actually improve performance on the (less related) target task. Concretely, in a controlled pre-training environment with 4M target tokens, 4B total tokens, and 150M parameter models, generic replay increases target data efficiency by up to $1.87\times$ for fine-tuning and $2.06\times$ for mid-training. We further analyze data schedules that introduce target data during pre-training and find that replay helps more when there is less target data present in pre-training. We demonstrate the success of replay in practice for fine-tuning 8B parameter models, improving agentic web navigation success by $4.5\%$ and Basque question-answering accuracy by $2\%$.
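The core intervention is simple: instead of fine-tuning on target data alone, a fixed fraction of the fine-tuning stream is drawn from the generic pretraining corpus. A minimal sketch of such a mixer is below; the function name, the uniform sampling of replayed examples, and the `replay_ratio` parameterization are illustrative assumptions, not the paper's exact implementation.

```python
import random

def replay_mix(target_data, generic_data, replay_ratio, seed=0):
    """Build a fine-tuning stream mixing target-domain examples with
    replayed generic pretraining examples.

    `replay_ratio` is the fraction of the final stream drawn from the
    generic pool (must be < 1). All target examples are kept; generic
    examples are sampled uniformly with replacement.
    """
    rng = random.Random(seed)
    n_target = len(target_data)
    # Number of generic examples needed so they form `replay_ratio`
    # of the combined stream: g / (g + n_target) = replay_ratio.
    n_generic = int(round(n_target * replay_ratio / (1.0 - replay_ratio)))
    replayed = [generic_data[rng.randrange(len(generic_data))]
                for _ in range(n_generic)]
    mixed = list(target_data) + replayed
    rng.shuffle(mixed)
    return mixed
```

At `replay_ratio=0.5`, for instance, every target example is matched by one replayed generic example, doubling the stream length while keeping all target tokens.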