🤖 AI Summary
This work investigates the training convergence dynamics of language models across different random seeds. We identify a four-phase convergence pattern (initial uniformity, rapid convergence, pronounced divergence, and slow reconvergence) and quantify cross-seed divergence as the expected per-token KL divergence between seeds. Our analysis is stratified by model scale, training step, and linguistic features (token frequency and part-of-speech tags). Key findings are: (1) model scale critically governs reconvergence capability: larger models successfully reconverge to stable output distributions in later training stages, whereas smaller models fail to do so; (2) convergence is linguistically uneven, with greater instability observed for low-frequency tokens and specific part-of-speech categories. To our knowledge, this is the first systematic characterization of non-monotonic convergence in language model training, revealing its dependence on model scale and its sensitivity to linguistic structure. The study provides a novel perspective on large-model training stability and introduces a quantifiable analytical framework grounded in information-theoretic and linguistic metrics.
📝 Abstract
In this paper, we investigate the convergence of language models (LMs) trained under different random seeds, measuring convergence as the expected per-token Kullback--Leibler (KL) divergence across seeds. By comparing LM convergence as a function of model size and training checkpoint, we identify a four-phase convergence pattern: (i) an initial uniform phase, (ii) a sharp-convergence phase, (iii) a sharp-divergence phase, and (iv) a slow-reconvergence phase. Further, we observe that larger models reconverge faster in later training stages, while smaller models never fully reconverge; these results suggest that a minimum model size may be necessary to learn stable distributions. Restricting our analysis to specific token frequencies or part-of-speech (PoS) tags further reveals that convergence is uneven across linguistic categories: frequent tokens and function words converge faster and more reliably than their counterparts (infrequent tokens and content words). Overall, our findings highlight factors that influence the stability of the distributions learned during model training.
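The core metric above (expected per-token KL divergence across seeds) can be sketched in a few lines of NumPy. Note this is a minimal illustration, not the authors' implementation: the exact pairing scheme (ordered pairs of seeds) and the toy softmax distributions below are assumptions for the sake of a self-contained example.

```python
import numpy as np

def per_token_kl(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """KL(p || q) at each token position.

    p, q: arrays of shape (num_tokens, vocab_size), rows are probability
    distributions over the vocabulary predicted by two differently-seeded models.
    """
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def cross_seed_divergence(dists: list[np.ndarray]) -> float:
    """Per-token KL averaged over tokens and over all ordered seed pairs."""
    kls = [per_token_kl(p, q).mean()
           for i, p in enumerate(dists)
           for j, q in enumerate(dists)
           if i != j]
    return float(np.mean(kls))

# Toy example: 3 "seeds", 5 token positions, vocabulary of 4 tokens.
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 5, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(cross_seed_divergence(list(probs)))  # > 0: seeds disagree
```

Under this reading of the metric, identical seed outputs give a divergence of zero, and the four-phase pattern corresponds to how this scalar evolves as the distributions are recomputed at successive training checkpoints.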