🤖 AI Summary
This work addresses the training instability commonly observed during mid-stage width expansion of neural networks, where naive initialization disrupts activation statistics and copy-based initialization introduces gradient symmetry, hindering feature diversity and computational efficiency. To resolve these signal-instability and gradient-symmetry issues, the authors propose a stable progressive learning framework for mid-stage width expansion. The approach enforces RMS-scale consistency to stabilize activation statistics, employs asymmetric optimizer state resetting to break gradient symmetry, and incorporates a learning rate re-warmup mechanism. The framework is compatible with diverse optimizers and Mixture-of-Experts (MoE) architectures, achieving up to 35% training cost savings at 2× width expansion compared to training from scratch, while remaining robust across varying widths and optimizer configurations.
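The first mechanism admits a compact illustration. The PyTorch sketch below widens a hidden dimension in a roughly function-preserving way: existing units are duplicated with small noise, and the matching downstream columns are duplicated and halved so pre-activation scales stay approximately constant. The function name, the noise term, and the Net2Net-style duplication are assumptions for illustration; the summary does not specify the framework's exact expansion rule.

```python
import torch

def widen_hidden_rms_consistent(w_in: torch.Tensor, w_out: torch.Tensor,
                                noise_std: float = 1e-3):
    """Double a hidden dimension sitting between two linear maps.

    w_in:  (hidden, d_model) -- projects into the hidden dimension
    w_out: (d_model, hidden) -- projects back out of it

    Hidden units are duplicated with small noise so copies are not exactly
    identical; the matching columns of w_out are duplicated and halved, so
    downstream pre-activations keep roughly the same RMS as before the
    expansion (an illustrative stand-in for RMS-scale consistency).
    """
    # Noisy copies of the incoming weights for the new hidden units.
    w_in_new = torch.cat([w_in, w_in + noise_std * torch.randn_like(w_in)], dim=0)
    # Duplicate and halve the outgoing columns: the two copies of each unit
    # together contribute what the original unit did, preserving signal scale.
    w_out_new = torch.cat([w_out, w_out], dim=1) / 2.0
    return w_in_new, w_out_new
```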
📝 Abstract
Progressive Learning (PL) reduces pre-training computational overhead by gradually increasing model scale. While prior work has extensively explored depth expansion, width expansion remains significantly understudied, with the few existing methods limited to the early stages of training. However, expanding width during the mid-stage is essential for maximizing computational savings, yet it remains a formidable challenge due to severe training instabilities. Empirically, we show that naive initialization at this stage disrupts activation statistics, triggering loss spikes, while copy-based initialization introduces gradient symmetry that hinders feature diversity. To address these issues, we propose SPARKLING (balancing Signal Preservation And symmetRy breaKing for width-progressive LearnING), a novel framework for mid-stage width expansion. Our method achieves signal preservation via RMS-scale consistency, stabilizing activation statistics during expansion. Symmetry breaking is ensured through asymmetric optimizer state resetting and learning rate re-warmup. Extensive experiments on Mixture-of-Experts (MoE) models demonstrate that, across multiple width axes and optimizer families, SPARKLING consistently outperforms training from scratch and reduces training cost by up to 35% under $2\times$ width expansion.
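The two symmetry-breaking ingredients can be sketched just as briefly. In the hedged example below, the Adam moments belonging only to newly added rows are zeroed (one asymmetric resetting scheme among several possible), and a linear schedule ramps the learning rate back up after the expansion point. Helper names, the slice-based reset, and the linear ramp are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def reset_new_unit_state(optimizer: torch.optim.Adam, param: torch.Tensor,
                         new_rows: slice) -> None:
    """Zero the Adam moments for only the newly added rows of `param`.

    Exact copies receive identical gradients, so resetting the optimizer
    state of just one copy in each pair makes their effective updates
    diverge, breaking the symmetry that copy-based expansion introduces.
    (Illustrative: the exact asymmetric reset is not given in the abstract.)
    """
    state = optimizer.state.get(param, {})
    for key in ("exp_avg", "exp_avg_sq"):
        if key in state:
            state[key][new_rows].zero_()


def rewarmed_lr(steps_since_expansion: int, base_lr: float,
                warmup_steps: int = 1000) -> float:
    """Linear re-warmup: ramp the learning rate from 0 back to base_lr
    over `warmup_steps` optimizer steps after the expansion point."""
    return base_lr * min(1.0, steps_since_expansion / warmup_steps)
```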