🤖 AI Summary
This study addresses the limitation of conventional large language model pretraining, which optimizes solely for validation loss while overlooking downstream adaptability. The authors systematically investigate the impact of weight decay on model plasticity and find that higher weight decay values, despite potentially worsening pretraining loss, substantially improve fine-tuning performance. Through comprehensive hyperparameter experiments, analyses of representation separability, attention visualizations, and overfitting metrics, they demonstrate that weight decay enhances plasticity by promoting linearly separable representations, regularizing attention structures, and mitigating overfitting. This work is the first to elucidate the beneficial role of weight decay in pretraining and advocates moving beyond cross-entropy loss alone toward more holistic criteria for hyperparameter selection, thereby establishing a new paradigm for model evaluation and optimization.
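The separability analysis mentioned above is commonly operationalized as a linear probe: freeze the model's representations and check how accurately a linear classifier can separate task labels from them. A minimal NumPy sketch of this idea on synthetic "representations" (the data, function name, and cluster parameters here are illustrative, not taken from the paper):

```python
import numpy as np

def linear_probe_accuracy(features, labels):
    """Fit a least-squares linear probe on frozen features and return
    train accuracy as a crude linear-separability score."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    y = np.where(labels > 0, 1.0, -1.0)                     # {-1, +1} targets
    w, *_ = np.linalg.lstsq(X, y, rcond=None)               # closed-form fit
    preds = np.sign(X @ w)
    return float((preds == y).mean())

rng = np.random.default_rng(0)
# Two synthetic feature sets: well-separated clusters stand in for the more
# linearly separable representations attributed to higher weight decay,
# entangled clusters for the less separable ones.
well_separated = np.vstack([rng.normal(-3, 1, (100, 8)),
                            rng.normal(3, 1, (100, 8))])
entangled = np.vstack([rng.normal(-0.2, 1, (100, 8)),
                       rng.normal(0.2, 1, (100, 8))])
labels = np.array([0] * 100 + [1] * 100)

acc_sep = linear_probe_accuracy(well_separated, labels)
acc_ent = linear_probe_accuracy(entangled, labels)
```

In this toy setup the probe scores near-perfect accuracy on the well-separated features and noticeably lower accuracy on the entangled ones, which is the kind of gap a separability analysis would surface between models pretrained with different weight decay values.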
📄 Abstract
The prevailing paradigm in large language model (LLM) development is to pretrain a base model, then perform further training to improve performance and model behavior. However, hyperparameter optimization and scaling laws have been studied primarily from the perspective of the base model's validation loss, ignoring downstream adaptability. In this work, we study pretraining from the perspective of model plasticity, that is, the ability of the base model to successfully adapt to downstream tasks through fine-tuning. We focus on the role of weight decay, a key regularization parameter during pretraining. Through systematic experiments, we show that models trained with larger weight decay values are more plastic, meaning they show larger performance gains when fine-tuned on downstream tasks. This phenomenon can lead to counterintuitive trade-offs where base models that perform worse after pretraining can perform better after fine-tuning. Further investigation of weight decay's mechanistic effects on model behavior reveals that it encourages linearly separable representations, regularizes attention matrices, and reduces overfitting on the training data. In conclusion, this work demonstrates the importance of using evaluation metrics beyond cross-entropy loss for hyperparameter optimization and sheds light on the multifaceted role that a single optimization hyperparameter plays in shaping model behavior.
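In modern LLM pretraining, weight decay is typically applied in the decoupled, AdamW-style form: the parameters are shrunk toward zero directly, separately from the gradient-based update, via θ ← θ − lr·(û + λθ), where û is the Adam update and λ is the weight decay coefficient. A minimal NumPy sketch of one such step (the function name and hyperparameter values are illustrative, not the paper's configuration):

```python
import numpy as np

def adamw_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.1):
    """One decoupled-weight-decay (AdamW-style) update step.

    The decay term `weight_decay * theta` is added to the parameter update
    directly, outside the moment estimates, so a larger `weight_decay`
    pulls weights toward zero more strongly regardless of the loss gradient.
    """
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps)
                          + weight_decay * theta)
    return theta, m, v

# Toy illustration: with a zero gradient only the decay term acts, so each
# step multiplies the parameters by exactly (1 - lr * weight_decay).
theta = np.ones(4)
m, v = np.zeros(4), np.zeros(4)
for t in range(1, 101):
    theta, m, v = adamw_step(theta, np.zeros(4), m, v, t, weight_decay=0.1)
```

The decoupling is the design choice that makes λ a clean regularization knob of the kind studied here: its shrinkage pressure on the weights does not get rescaled by Adam's per-coordinate adaptive step sizes.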