Curriculum Learning for LLM Pretraining: An Analysis of Learning Dynamics

📅 2026-01-29
🤖 AI Summary
This study investigates whether curriculum learning in large language model pretraining alters the learning trajectory or merely reshapes the order of data exposure. We implement three linguistically motivated curricula—based on age of acquisition, word frequency, and verb variability—on the Pythia model series, comparing them against random data ordering. Through gradient variance analysis, spectral saturation measurements, and idealized optimization theory, we demonstrate that curriculum learning primarily enhances performance by stabilizing the optimization dynamics within existing learning phases rather than introducing new stages. Empirical results show that curricula significantly reduce gradient noise and output head spectral saturation in smaller models, yielding higher accuracy; however, these benefits diminish with increasing model scale, revealing a scaling-dependent efficacy of curriculum strategies and establishing a theoretical link between difficulty scheduling and optimization stability.
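The gradient variance analysis mentioned above can be illustrated with a minimal sketch: treating each minibatch gradient as a sample, gradient noise can be estimated as the average per-coordinate variance around the mean gradient. This is an illustrative proxy, not the paper's exact estimator; the function name and input layout are assumptions.

```python
import numpy as np

def gradient_noise(per_batch_grads):
    """Estimate gradient noise as the mean per-coordinate variance of
    minibatch gradients around their average.

    per_batch_grads: array-like of shape (num_batches, num_params),
    one flattened gradient vector per minibatch. A lower value suggests
    more stable optimization; a curriculum that reduces this quantity
    would match the stabilization effect described in the summary.
    (Illustrative sketch, not the paper's implementation.)
    """
    g = np.asarray(per_batch_grads, dtype=float)
    mean_g = g.mean(axis=0)                 # average gradient across batches
    return float(((g - mean_g) ** 2).mean())  # mean squared deviation
```

For example, identical gradients across batches give zero noise, while widely scattered gradients give a large value.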

📝 Abstract
Curriculum learning changes the order of pre-training data, but it remains unclear whether it changes the learning trajectory or mainly reorders exposure over a fixed trajectory. We train Pythia models (14M–410M parameters) for 300B tokens under three linguistically motivated curricula: Age of Acquisition, word frequency, and Verb Variation (VV). We compare each against Random ordering; at 1B parameters we compare Random and VV. Across orderings, training follows a shared sequence of latent phases, while curricula mainly change within-phase data exposure. In smaller models (up to 160M parameters), Random ordering exhibits higher gradient noise and stronger late-training output-head spectral saturation, alongside lower final accuracy; curricula reduce both effects at matched compute. At larger scales, saturation differences are smaller and curriculum gains shrink. We formalize the link between difficulty pacing and optimization stability in an idealized analysis based on gradient-variance control, and our results point to a practical takeaway: curricula help by stabilizing within-phase optimization rather than by creating new phases.
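The difficulty pacing discussed in the abstract can be sketched as a competence-based schedule: samples are ranked by a difficulty score (e.g., age of acquisition or inverse word frequency), and at each step the model draws only from the easiest fraction of the data, with that fraction growing linearly over training. The function below is a hedged sketch under that assumption; the names, linear pacing choice, and sampling scheme are illustrative, not the paper's implementation.

```python
import random

def curriculum_schedule(samples, difficulty, n_steps, seed=0):
    """Competence-based pacing: at step t, sample from the easiest
    t/n_steps fraction of the data, so harder examples enter gradually.

    difficulty: maps a sample to a scalar score (higher = harder).
    Returns one sampled item per step. (Illustrative sketch only.)
    """
    rng = random.Random(seed)
    ranked = sorted(samples, key=difficulty)  # easy -> hard
    schedule = []
    for t in range(1, n_steps + 1):
        # Linear pacing in integer arithmetic: pool grows from 1 sample
        # to the full dataset by the final step.
        pool_size = max(1, len(ranked) * t // n_steps)
        schedule.append(rng.choice(ranked[:pool_size]))
    return schedule
```

Random ordering, the paper's baseline, corresponds to the degenerate case where the pool is the full dataset from the first step.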
Problem

Research questions and friction points this paper is trying to address.

Curriculum Learning
LLM Pretraining
Learning Dynamics
Optimization Stability
Gradient Noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curriculum Learning
Learning Dynamics
Gradient Variance
Optimization Stability
Large Language Models