🤖 AI Summary
This work addresses the instability in deep neural network training caused by the accumulation and amplification of singularities in both parameter and representation spaces, which can lead to optimization failure or loss explosion. We uncover, for the first time, a mutually reinforcing “curse of singularity” mechanism between these two spaces and propose Parametric Singularity Smoothing (PSS)—a lightweight, general, and effective method to mitigate this issue. Building on a singular value analysis that bounds gradient norms by the top singular values of the weight matrices, PSS suppresses the alignment and growth of singularities by smoothing the singular spectra of the weight matrices. Experiments demonstrate that PSS substantially enhances training stability and generalization across diverse architectures, datasets, and optimizers, and can even restore trainability after training collapse.
📝 Abstract
This work investigates the optimization instability of deep neural networks from a less-explored yet insightful perspective: the emergence and amplification of singularities in the parametric space. Our analysis reveals that parametric singularities inevitably grow with gradient updates and further intensify alignment with representations, leading to increased singularities in the representation space. We show that the gradient Frobenius norms are bounded by the top singular values of the weight matrices, and as training progresses, the mutually reinforcing growth of weight and representation singularities, termed the curse of singularities, relaxes these bounds, escalating the risk of sharp loss explosions. To counter this, we propose Parametric Singularity Smoothing (PSS), a lightweight, flexible, and effective method for smoothing the singular spectra of weight matrices. Extensive experiments across diverse datasets, architectures, and optimizers demonstrate that PSS mitigates instability, restores trainability even after failure, and improves both training efficiency and generalization.
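The abstract describes PSS as smoothing the singular spectra of weight matrices so that the top singular values, which bound the gradient Frobenius norms, do not grow unchecked. A minimal NumPy sketch of one such smoothing is below; note that the interpolation rule and the coefficient `alpha` are illustrative assumptions, not the paper's exact PSS formula:

```python
import numpy as np

def smooth_singular_spectrum(W, alpha=0.5):
    """Hypothetical sketch of singular-spectrum smoothing.

    Decomposes W via SVD and shrinks each singular value toward the
    spectrum's mean, damping the top singular values that bound the
    gradient norm. The exact smoothing used by PSS may differ; `alpha`
    is an assumed interpolation coefficient (0 = unchanged, 1 = flat).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_smooth = (1 - alpha) * s + alpha * s.mean()  # pull spectrum toward its mean
    return U @ np.diag(s_smooth) @ Vt

# Illustration: the smoothed matrix has the same shape but a flatter,
# lower-peaked singular spectrum than the original.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
W_s = smooth_singular_spectrum(W, alpha=0.5)
print(np.linalg.svd(W, compute_uv=False)[0],
      np.linalg.svd(W_s, compute_uv=False)[0])
```

In a training loop, such a transform would be applied periodically to each layer's weight matrix, trading a small SVD cost for a tighter bound on the gradient norm.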