🤖 AI Summary
This work addresses the instability of activation scales in Transformer training, which constrains the maximum usable learning rate. The authors propose SimpleNorm, a normalization method that stabilizes intermediate activations, analyzed through a second-order geometric perspective, and establish a theoretical connection among architectural design, activation scale, and the spectral norm of the Hessian. The analysis shows that SimpleNorm substantially reduces the Hessian spectral norm, thereby enhancing training stability and permitting significantly larger stable learning rates. Empirical validation on GPT models ranging from 1B to 8B parameters shows that SimpleGPT, the SimpleNorm-based network, supports learning rates 3–10 times higher than conventional settings. At the 7B scale, it reaches a training loss of 2.208 in only 60K training steps, outperforming baselines such as LLaMA2 with QKNorm.
📝 Abstract
In this work, we revisit Transformer optimization through the lens of second-order geometry and establish a direct connection between architectural design, activation scale, the Hessian matrix, and the maximum tolerable learning rate. We introduce a simple normalization strategy, termed SimpleNorm, which stabilizes intermediate activation scales by construction. Then, by analyzing the Hessian of the loss with respect to network activations, we theoretically show that SimpleNorm significantly reduces the spectral norm of the Hessian, thereby permitting larger stable learning rates. We validate our theoretical findings through extensive experiments on large GPT models at parameter scales of 1B, 1.4B, 7B, and 8B. Empirically, SimpleGPT, our SimpleNorm-based network, tolerates learning rates 3$\times$–10$\times$ larger than conventional settings, consistently demonstrates strong optimization stability, and achieves substantially better performance than well-established baselines. Specifically, when training 7B-scale models for 60K steps, SimpleGPT achieves a training loss that is 0.08 lower than that of LLaMA2 with QKNorm, reducing the loss from 2.290 to 2.208. Our source code will be released at https://github.com/Ocram7/SimpleGPT.
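The abstract does not give SimpleNorm's formula, only that it fixes intermediate activation scales by construction. As a rough illustration of that idea (not the authors' actual method), a minimal sketch of an RMS-style rescaling that bounds per-token activation scale might look like this; the function name and details are assumptions for illustration only:

```python
import numpy as np

def simple_norm_sketch(x, eps=1e-6):
    """Illustrative sketch only: rescale each activation vector to unit RMS,
    so the intermediate activation scale is fixed by construction.
    The paper's actual SimpleNorm may differ from this."""
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms

# Activations with an inflated scale, as might arise during training
x = np.random.randn(4, 512) * 50.0
y = simple_norm_sketch(x)
# After rescaling, each row's RMS is ~1 regardless of the input scale
print(np.sqrt(np.mean(y ** 2, axis=-1)))
```

The point of such a construction is that downstream layers always see activations of a controlled magnitude, which is the property the paper connects to a smaller Hessian spectral norm and hence larger stable learning rates.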