Dispelling the Curse of Singularities in Neural Network Optimizations

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability in deep neural network training caused by the accumulation and amplification of singularities in both parameter and representation spaces, which can lead to optimization failure or loss explosion. We uncover, for the first time, a mutually reinforcing “curse of singularity” mechanism between these two spaces and propose Parametric Singularity Smoothing (PSS)—a lightweight, general, and effective method to mitigate this issue. PSS leverages singular value analysis to establish theoretical bounds on gradient norms and suppresses the alignment and growth of singularities by smoothing the singular spectrum of weight matrices. Experiments demonstrate that PSS substantially enhances training stability and generalization across diverse architectures, datasets, and optimizers, and can even restore trainability after training collapse.
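The summary's claim that gradient norms are bounded by the top singular values follows from the standard operator-norm inequality for a linear layer: for y = W x, the backpropagated gradient satisfies ‖Wᵀg‖ ≤ σ_max(W)·‖g‖. A minimal numerical check of that inequality (the matrix shapes and random data here are illustrative, not from the paper):

```python
import numpy as np

# For a linear map y = W x with upstream gradient g = dL/dy,
# the input gradient dL/dx = W^T g obeys ||W^T g|| <= sigma_max(W) * ||g||.
rng = np.random.default_rng(1)
W = rng.normal(size=(32, 16))
g = rng.normal(size=32)                              # upstream gradient dL/dy
grad_x = W.T @ g                                     # dL/dx
sigma_max = np.linalg.svd(W, compute_uv=False)[0]    # top singular value
assert np.linalg.norm(grad_x) <= sigma_max * np.linalg.norm(g)
```

This is why growth of the top singular value relaxes the gradient-norm bound: the bound scales linearly with σ_max.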

📝 Abstract
This work investigates the optimization instability of deep neural networks from a less-explored yet insightful perspective: the emergence and amplification of singularities in the parametric space. Our analysis reveals that parametric singularities inevitably grow with gradient updates and further intensify alignment with representations, leading to increased singularities in the representation space. We show that the gradient Frobenius norms are bounded by the top singular values of the weight matrices, and as training progresses, the mutually reinforcing growth of weight and representation singularities, termed the curse of singularities, relaxes these bounds, escalating the risk of sharp loss explosions. To counter this, we propose Parametric Singularity Smoothing (PSS), a lightweight, flexible, and effective method for smoothing the singular spectra of weight matrices. Extensive experiments across diverse datasets, architectures, and optimizers demonstrate that PSS mitigates instability, restores trainability even after failure, and improves both training efficiency and generalization.
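The abstract describes PSS as smoothing the singular spectra of weight matrices but does not give the exact smoothing formula. A minimal sketch of the general idea, using an assumed (hypothetical) smoothing rule that shrinks each singular value toward the spectrum's mean by a factor `alpha`:

```python
import numpy as np

def smooth_singular_spectrum(W, alpha=0.5):
    """Illustrative PSS-style smoothing (formula assumed, not the paper's):
    shrink each singular value toward the spectrum mean by factor alpha,
    which lowers sigma_max and flattens the spectrum while preserving
    the singular directions and the spectrum's mean."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_smoothed = (1.0 - alpha) * s + alpha * s.mean()
    return U @ np.diag(s_smoothed) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
W_s = smooth_singular_spectrum(W, alpha=0.5)
```

Because the top singular value bounds the gradient norm, any such spectrum-flattening step tightens that bound; the interpolation toward the mean used here is one simple choice among many.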
Problem

Research questions and friction points this paper is trying to address.

optimization instability
singularities
neural networks
parametric space
representation space
Innovation

Methods, ideas, or system contributions that make the work stand out.

singularities
neural network optimization
Parametric Singularity Smoothing
weight matrix spectrum
training instability
Hengjie Cao
Fudan University
Mengyi Chen
Fudan University
Yifeng Yang
Department of Computer Science, Shanghai Jiaotong University
Machine Learning
Fang Dong
Southeast University
Edge Computing, Cloud, AIoT
Ruijun Huang
Fudan University
Anrui Chen
Fudan University
Jixian Zhou
Fudan University
Mingzhi Dong
University of Bath
Yujiang Wang
University of Oxford
AI in Healthcare, AI4Science
Dongsheng Li
Fudan University
Wenyi Fang
Huawei
Yuanyi Lin
Huawei
Fan Wu
Huawei
Li Shang
Fudan University, University of Colorado Boulder
Human-centered Computing, Machine Learning, Computer Systems, VLSI & EDA