Learning Dynamics of Deep Linear Networks Beyond the Edge of Stability

📅 2025-02-27
📈 Citations: 0
Influential citations: 0
📄 PDF
🤖 AI Summary
This work investigates the learning dynamics of deep linear networks (DLNs) under gradient descent when training crosses the edge of stability (EOS). To explain the observed phenomena, including loss oscillations, chaotic evolution, and the breakdown of symmetry-induced conservation laws, the paper develops a dynamical model based on matrix decomposition. Combining Hessian spectral analysis, the evolution of the singular-value balancing gap, and periodic-orbit stability theory, it rigorously characterizes, for the first time, the monotonic decay of the conservation law and establishes a quantitative relationship between the dimension of the oscillatory subspace and the learning rate. The paper proves the existence of 2-periodic orbits and their confinement to a small subspace, which explains two empirical observations: shallow networks do not always exhibit EOS, and oscillations are localized to the top features. Experiments validate the theoretical predictions and further reveal EOS-like oscillatory behavior in nonlinear networks.
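
To make the balancing-gap conservation law concrete, here is a minimal sketch (not the authors' code; the target spectrum, initialization scale, and learning rate are assumptions chosen for illustration). It trains a two-layer matrix factorization by plain gradient descent and tracks the gap $\|W_2^\top W_2 - W_1 W_1^\top\|_F$, which gradient flow conserves exactly but which, per the paper, breaks and decays to zero once GD runs beyond EOS:

```python
import numpy as np

# Minimal sketch (not the authors' code): two-layer matrix factorization
# L(W1, W2) = 0.5 * ||W2 @ W1 - A||_F^2 trained by plain gradient descent,
# tracking the balancing gap ||W2^T W2 - W1 W1^T||_F. Gradient flow conserves
# this gap exactly; per the paper it breaks and decays to zero beyond EOS.
# The spectrum, init scale, and eta below are assumptions for illustration.
rng = np.random.default_rng(0)
d = 4
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U @ np.diag([4.0, 1.0, 0.5, 0.25]) @ U.T
W1 = 0.1 * rng.standard_normal((d, d))
W2 = 0.1 * rng.standard_normal((d, d))
eta = 0.3  # beyond 2/sharpness for the top mode (sharpness ~ 2*4 at the minimum)

for t in range(3001):
    R = W2 @ W1 - A                                   # residual
    W1, W2 = W1 - eta * (W2.T @ R), W2 - eta * (R @ W1.T)
    if t % 500 == 0:
        loss = 0.5 * np.linalg.norm(R) ** 2
        gap = np.linalg.norm(W2.T @ W2 - W1 @ W1.T)   # conserved under flow
        print(f"iter {t:4d}  loss {loss:8.4f}  gap {gap:.5f}")
```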

📝 Abstract
Deep neural networks trained using gradient descent with a fixed learning rate $\eta$ often operate in the regime of "edge of stability" (EOS), where the largest eigenvalue of the Hessian equilibrates about the stability threshold $2/\eta$. In this work, we present a fine-grained analysis of the learning dynamics of (deep) linear networks (DLNs) within the deep matrix factorization loss beyond EOS. For DLNs, loss oscillations beyond EOS follow a period-doubling route to chaos. We theoretically analyze the regime of the 2-period orbit and show that the loss oscillations occur within a small subspace, with the dimension of the subspace precisely characterized by the learning rate. The crux of our analysis lies in showing that the symmetry-induced conservation law for gradient flow, defined as the balancing gap among the singular values across layers, breaks at EOS and decays monotonically to zero. Overall, our results contribute to explaining two key phenomena in deep networks: (i) shallow models and simple tasks do not always exhibit EOS; and (ii) oscillations occur within top features. We present experiments to support our theory, along with examples demonstrating how these phenomena occur in nonlinear networks and how they differ from networks with benign landscapes such as DLNs.
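
The EOS condition itself, $\lambda_{\max}(\nabla^2 L) \approx 2/\eta$, can be checked numerically. Below is a sketch under stated assumptions (problem size, target spectrum, step size, and the finite-difference Hessian-vector products are all choices of this example, not the paper's): it estimates the sharpness along the GD trajectory via power iteration and compares it with $2/\eta$:

```python
import numpy as np

# Sketch (not from the paper): estimate sharpness lambda_max(H) along GD via
# power iteration on finite-difference Hessian-vector products, and compare it
# with the stability threshold 2/eta. Sizes, spectrum, and eta are assumptions.
rng = np.random.default_rng(1)
d, eta = 4, 0.3
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U @ np.diag([4.0, 1.0, 0.5, 0.25]) @ U.T   # target with known spectrum
theta = 0.1 * rng.standard_normal(2 * d * d)   # packed [vec(W1), vec(W2)]

def grad(th):
    W1, W2 = th[:d*d].reshape(d, d), th[d*d:].reshape(d, d)
    R = W2 @ W1 - A
    return np.concatenate([(W2.T @ R).ravel(), (R @ W1.T).ravel()])

def sharpness(th, iters=100, eps=1e-4):
    v = rng.standard_normal(th.size)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):  # power iteration; Hv by central differences
        hv = (grad(th + eps * v) - grad(th - eps * v)) / (2 * eps)
        lam = np.linalg.norm(hv)
        v = hv / lam
    return lam

for t in range(1, 1201):
    theta -= eta * grad(theta)
    if t % 200 == 0:
        print(f"iter {t:4d}  sharpness {sharpness(theta):6.3f}  2/eta {2/eta:.3f}")
```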
Problem

Research questions and friction points this paper is trying to address.

Analyzing the learning dynamics of deep linear networks beyond the edge of stability.
Exploring loss oscillations and the period-doubling route to chaos in DLNs (see the sketch after this list).
Investigating the breakdown of the symmetry-induced conservation law at the stability threshold.
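
As referenced above, the period-doubling route is easiest to see in the smallest possible DLN. A sketch (the reduction and the step sizes and tolerances below are this example's assumptions): with $L(w_1, w_2) = \tfrac{1}{2}(w_1 w_2 - 1)^2$ and balanced initialization $w_1 = w_2 = w$, gradient descent reduces to the one-dimensional map $w \leftarrow w - \eta (w^2 - 1) w$, whose attractor period can be measured directly:

```python
import numpy as np

# Sketch: measure the attractor period of the 1-D map induced by GD on the
# smallest DLN (balanced two-layer scalar factorization). EOS for this loss
# sits at eta = 1 (sharpness 2 at the minimum w = 1); the eta values below
# are assumptions chosen to probe the cascade, not taken from the paper.
def orbit_period(eta, w0=1.2, burn=5000, window=64, tol=1e-6):
    w = w0
    for _ in range(burn):                       # discard the transient
        w = w - eta * (w**2 - 1.0) * w
        if not np.isfinite(w) or abs(w) > 1e6:
            return "diverged"
    traj = []
    for _ in range(window):
        w = w - eta * (w**2 - 1.0) * w
        traj.append(w)
    traj = np.array(traj)
    for p in range(1, window // 2):             # smallest repeating period
        if np.all(np.abs(traj[p:] - traj[:-p]) < tol):
            return p
    return f">= {window // 2} or chaotic"

for eta in [0.8, 1.05, 1.2, 1.25, 1.28, 1.3]:
    print(f"eta = {eta:.2f}  ->  period {orbit_period(eta)}")
```

Below $\eta = 1$ the map settles to the fixed point (period 1); just above it a 2-period orbit appears, matching the regime the paper analyzes, with further doublings as $\eta$ grows.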
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes deep linear networks beyond the edge of stability
Characterizes loss oscillations via the period-doubling route to chaos
Explains the breakdown of the symmetry-induced conservation law and the confinement of oscillations to top features (sketched below)
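
A quick way to probe the "oscillations within top features" claim (again a sketch; the spectrum, learning rate, and sizes are assumptions of this example): fit a target with well-separated singular values at a step size that destabilizes only the top direction, then inspect which singular values of the end-to-end matrix $W_2 W_1$ still move at the end of training:

```python
import numpy as np

# Sketch: beyond EOS, only the top singular direction should oscillate.
# With eta = 0.3, a mode sigma is unstable iff eta > 1/sigma (its sharpness
# is ~ 2*sigma at a balanced minimum), so only sigma = 4 here. All constants
# are assumptions for illustration, not taken from the paper.
rng = np.random.default_rng(2)
d = 4
U, _ = np.linalg.qr(rng.standard_normal((d, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U @ np.diag([4.0, 1.0, 0.5, 0.25]) @ V.T    # well-separated spectrum
W1 = 0.1 * rng.standard_normal((d, d))
W2 = 0.1 * rng.standard_normal((d, d))
eta = 0.3

for t in range(3000):
    R = W2 @ W1 - A
    W1, W2 = W1 - eta * (W2.T @ R), W2 - eta * (R @ W1.T)
    if t >= 2994:   # last few iterates: top value flips, the rest sit still
        s = np.linalg.svd(W2 @ W1, compute_uv=False)
        print(f"iter {t}  singular values:", np.round(s, 3))
```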