Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks

📅 2025-07-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates universal scaling behavior in neural network training dynamics when model size and training steps grow together. It identifies and empirically validates a "supercollapse" phenomenon: under compute-optimal budget allocation, normalized loss curves from models of varying scale collapse onto a single trajectory, with deviations smaller than the seed-to-seed noise of individual runs. The analysis combines stochastic gradient descent (SGD) noise dynamics, learning rate decay schedules, and the power-law structure of neural scaling laws, and is validated across diverse architectures (e.g., Transformers), datasets, and training configurations. Experiments show that supercollapse generalizes broadly and enables quantitative prediction of loss evolution under arbitrary learning rate schedules. The core contribution is the discovery of a deep universality in training dynamics, providing both a theoretical criterion and practical guidance for scaling large models.

📝 Abstract
What scaling limits govern neural network training dynamics when model size and training time grow in tandem? We show that despite the complex interactions between architecture, training algorithms, and data, compute-optimally trained models exhibit a remarkably precise universality. Specifically, loss curves from models of varying sizes collapse onto a single universal curve when training compute and loss are normalized to unity at the end of training. With learning rate decay, the collapse becomes so tight that differences in the normalized curves across models fall below the noise floor of individual loss curves across random seeds, a phenomenon we term supercollapse. We observe supercollapse across learning rate schedules, datasets, and architectures, including transformers trained on next-token prediction, and find it breaks down when hyperparameters are scaled suboptimally, providing a precise and practical indicator of good scaling. We explain these phenomena by connecting collapse to the power-law structure in typical neural scaling laws, and analyzing a simple yet surprisingly effective model of SGD noise dynamics that accurately predicts loss curves across various learning rate schedules and quantitatively explains the origin of supercollapse.
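The normalization step the abstract describes can be sketched numerically. The snippet below is a minimal illustration, not the paper's code: it builds synthetic power-law loss curves with a shared exponent `alpha` but different prefactors and compute budgets (all values chosen for illustration), then rescales compute and loss to equal 1 at the end of training and checks that the rescaled curves follow one universal shape.

```python
import numpy as np

def normalize_curve(compute, loss):
    """Rescale a loss curve so compute and loss both equal 1 at end of training."""
    return compute / compute[-1], loss / loss[-1]

# Synthetic loss curves L(c) = A * c^(-alpha) for three "model sizes".
# The prefactor A and total compute c_max differ per model; alpha is shared.
# These numbers are illustrative assumptions, not values from the paper.
alpha = 0.3
curves = []
for A, c_max in [(2.0, 1e3), (3.0, 1e4), (5.0, 1e5)]:
    compute = np.logspace(0, np.log10(c_max), 200)
    loss = A * compute ** (-alpha)
    curves.append(normalize_curve(compute, loss))

# After normalization the prefactor cancels: A*c^(-alpha) / (A*c_max^(-alpha))
# = (c / c_max)^(-alpha), so every curve lies on l_hat = c_hat^(-alpha).
for c_hat, l_hat in curves:
    assert np.allclose(l_hat, c_hat ** (-alpha))
```

In this idealized pure power-law setting the collapse is exact; the paper's point is that real, compute-optimally trained models approach this ideal closely enough that residual deviations fall below seed-to-seed noise.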
Problem

Research questions and friction points this paper is trying to address.

Understanding scaling limits in neural network training dynamics
Exploring universal loss curve collapse in compute-optimal models
Investigating hyperparameter impact on scaling and supercollapse phenomena
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal loss curve collapse in compute-optimal training
Supercollapse phenomenon with learning rate decay
SGD noise model predicts loss curves accurately