Pre-training LLM without Learning Rate Decay Enhances Supervised Fine-Tuning

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the impact of learning rate decay schedules on the downstream supervised fine-tuning performance of large language models. Challenging the conventional practice of employing learning rate decay during pretraining to minimize training loss, we propose the Warmup-Stable-Only (WSO) strategy, which maintains a constant learning rate after an initial warmup phase. Experiments on models with 1B and 8B parameters demonstrate that, although WSO yields slightly higher pretraining loss compared to decay-based schedules, it produces a flatter loss landscape that substantially enhances fine-tuning performance across diverse downstream tasks. These findings question the prevailing paradigm that prioritizes pretraining loss minimization as the sole optimization objective.

📝 Abstract
We investigate the role of learning rate scheduling in the large-scale pre-training of large language models, focusing on its influence on downstream performance after supervised fine-tuning (SFT). Decay-based learning rate schedulers are widely used to minimize pre-training loss. However, despite their widespread use, how these schedulers affect performance after SFT remains underexplored. In this paper, we examine Warmup-Stable-Only (WSO), which maintains a constant learning rate after warmup without any decay. Through experiments with 1B and 8B parameter models, we show that WSO consistently outperforms decay-based schedulers in terms of performance after SFT, even though decay-based schedulers may exhibit better performance after pre-training. This result also holds across different training regimes, including mid-training and over-training. Loss landscape analysis further reveals that decay-based schedulers lead models into sharper minima, whereas WSO preserves flatter minima that support adaptability. These findings indicate that applying LR decay to improve pre-training metrics may compromise downstream adaptability. Our work also provides practical guidance for training and model release strategies, highlighting that pre-training models with WSO enhances their adaptability for downstream tasks.
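The two schedule families the abstract contrasts can be sketched as simple step-to-LR functions. This is an illustrative sketch only: the peak learning rate, warmup length, decay fraction, and function names below are placeholder assumptions, not hyperparameters or code from the paper.

```python
def wso_lr(step, peak_lr=3e-4, warmup_steps=2000):
    """Warmup-Stable-Only: linear warmup to peak_lr, then held constant forever.

    Illustrative values; the paper's actual hyperparameters may differ.
    """
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    return peak_lr


def wsd_lr(step, total_steps, peak_lr=3e-4, warmup_steps=2000, min_lr=3e-5):
    """Decay-based baseline (warmup-stable-decay style) for comparison:
    same warmup, constant plateau, then linear decay over the final 10% of steps.
    """
    decay_start = int(0.9 * total_steps)
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    if step < decay_start:
        return peak_lr
    frac = (step - decay_start) / (total_steps - decay_start)
    return peak_lr + frac * (min_lr - peak_lr)
```

The only difference between the two is the final decay phase: WSO simply omits it, trading slightly higher pre-training loss for the flatter minima the paper associates with better post-SFT performance.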
Problem

Research questions and friction points this paper is trying to address.

learning rate scheduling
pre-training
supervised fine-tuning
downstream performance
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

learning rate scheduling
Warmup-Stable-Only
supervised fine-tuning
loss landscape
flat minima