Enhancing LLM Training via Spectral Clipping

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a key limitation of standard adaptive optimizers in large language model (LLM) training: they ignore the global spectral structure of weights and gradients, which can produce excessively large spectral norms in parameter updates and sparse spectral spikes in gradient noise, undermining training stability and generalization. To mitigate this, the authors propose SPECTRA, a framework that enforces spectral-norm constraints on updates via post-spectral clipping and optionally suppresses noise spikes through pre-spectral clipping. Theoretically, post-clipping is shown to be a composite Frank-Wolfe method whose spectral-norm constraint recovers Frobenius and ℓ∞-norm regularization for SGD-based and sign-based methods. Algorithmically, SPECTRA performs efficient soft spectral clipping with Newton-Schulz iterations, avoiding costly singular value decomposition while remaining compatible with mainstream optimizers such as AdamW, Signum, and AdEMAMix. Experiments demonstrate consistent reductions in validation loss during LLM pretraining, with the best variant achieving state-of-the-art performance and yielding models with smaller weight norms, confirming its regularizing effect.
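The SVD-free soft clipping mentioned above can be sketched in plain NumPy. This is an illustrative reconstruction, not the authors' code: `ns_msign` approximates the matrix sign (polar factor) with the standard cubic Newton-Schulz iteration, and `soft_spectral_clip` applies the scalar identity min(σ, t) = ½(σ + t − |σ − t|) spectrally; the function names and iteration count are assumptions.

```python
import numpy as np

def ns_msign(W, iters=30):
    """Approximate the polar factor U V^T of W = U S V^T with the cubic
    Newton-Schulz iteration X <- 0.5 * X (3I - X^T X). Normalizing by the
    Frobenius norm puts all singular values in (0, 1], where the iteration
    drives each of them to 1."""
    X = W / np.linalg.norm(W)
    for _ in range(iters):
        X = 0.5 * X @ (3.0 * np.eye(W.shape[1]) - X.T @ X)
    return X

def soft_spectral_clip(W, t):
    """Cap every singular value of W at t without an explicit SVD.
    With O ~ U V^T and A = W - t*O = U (S - tI) V^T, the product
    O @ msign(A)^T @ A equals U |S - tI| V^T, so
    0.5 * (W + t*O - U|S - tI|V^T) = U min(S, tI) V^T."""
    O = ns_msign(W)                  # ~ U V^T
    A = W - t * O                    # ~ U (S - tI) V^T
    abs_A = O @ ns_msign(A).T @ A    # ~ U |S - tI| V^T
    return 0.5 * (W + t * O - abs_A)
```

For reference, the exact operation via SVD is `U @ np.diag(np.minimum(S, t)) @ Vt`; the Newton-Schulz route trades that exactness for GEMM-only arithmetic that maps well onto accelerators.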

📝 Abstract
While spectral-based optimizers like Muon operate directly on the spectrum of updates, standard adaptive methods such as AdamW do not account for the global spectral structure of weights and gradients, leaving them vulnerable to two empirical issues in large language model (LLM) training: (i) the optimizer updates can have large spectral norms, potentially destabilizing training and degrading generalization; (ii) stochastic gradient noise can exhibit sparse spectral spikes, with a few dominant singular values much larger than the rest. We propose SPECTRA, a general framework addressing these by (i) post-spectral clipping of updates to enforce spectral-norm constraints; (ii) optional pre-spectral clipping of gradients to suppress spectral noise spikes. We prove that post-clipping constitutes a Composite Frank-Wolfe method with spectral-norm constraints and weight regularization, recovering Frobenius and $\ell_{\infty}$-norm regularization with SGD-based and sign-based methods. We further analyze how pre-clipping mitigates sparse spectral spikes. We propose efficient soft spectral clipping via Newton-Schulz iterations, avoiding expensive SVD. Experiments on LLM pretraining show SPECTRA uniformly improves validation loss for various optimizers, including AdamW, Signum, and AdEMAMix, with the best-performing variants achieving state-of-the-art results. Models trained with SPECTRA exhibit smaller weight norms, confirming the link between spectral clipping and regularization.
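As a concrete illustration of where post-spectral clipping sits in a training step, here is a minimal sketch with assumed names, using an exact SVD-based clip for clarity rather than the paper's Newton-Schulz variant: the optimizer forms its raw update as usual (a Signum-style sign update here), the update's singular values are capped, and only then are the weights modified.

```python
import numpy as np

def svd_spectral_clip(M, t):
    """Hard-cap the singular values of an update matrix at t (exact, via SVD)."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.minimum(S, t)) @ Vt

def signum_step_with_post_clip(W, grad, momentum, beta=0.9, lr=1e-2, clip=1.0):
    """One Signum-style step with post-spectral clipping: the sign of the
    momentum buffer forms the raw update, whose spectral norm is then
    constrained before being applied to the weights."""
    momentum = beta * momentum + (1.0 - beta) * grad
    update = np.sign(momentum)                # sign-based raw update
    update = svd_spectral_clip(update, clip)  # post-spectral clipping
    return W - lr * update, momentum
```

Pre-spectral clipping would apply the same capping to `grad` before the momentum accumulation; the hyperparameter values shown are placeholders, not the paper's settings.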
Problem

Research questions and friction points this paper is trying to address.

spectral norm
large language model training
gradient noise
optimizer stability
spectral spikes
Innovation

Methods, ideas, or system contributions that make the work stand out.

spectral clipping
large language model training
Frank-Wolfe optimization
gradient noise suppression
Newton-Schulz iteration