Optimal Initialization in Depth: Lyapunov Initialization and Limit Theorems for Deep Leaky ReLU Networks

📅 2026-02-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the instability of activation magnitudes in deep Leaky ReLU networks at initialization, particularly in narrow architectures where conventional He and orthogonal initializations fail to ensure stable signal propagation. The authors introduce, for the first time, the Lyapunov exponent as a tool to analyze activation dynamics in deep networks. By leveraging the law of large numbers and the central limit theorem, they characterize the asymptotic behavior of the logarithm of activation norms and explicitly compute the Lyapunov exponent using random matrix theory. Building on this analysis, they propose Lyapunov initialization, which enforces a zero Lyapunov exponent to achieve optimal activation stability. Empirical results demonstrate that this method significantly outperforms existing initialization strategies and substantially improves training performance in narrow, deep networks.

📝 Abstract
The development of effective initialization methods requires an understanding of random neural networks. In this work, a rigorous probabilistic analysis of deep unbiased Leaky ReLU networks is provided. We prove a Law of Large Numbers and a Central Limit Theorem for the logarithm of the norm of network activations, establishing that, as the number of layers increases, their growth is governed by a parameter called the Lyapunov exponent. This parameter characterizes a sharp phase transition between vanishing and exploding activations, and we calculate the Lyapunov exponent explicitly for Gaussian or orthogonal weight matrices. Our results reveal that standard methods, such as He initialization or orthogonal initialization, do not guarantee activation stability for deep networks of low width. Based on these theoretical insights, we propose a novel initialization method, referred to as Lyapunov initialization, which sets the Lyapunov exponent to zero and thereby ensures that the neural network is as stable as possible, leading empirically to improved learning.
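The idea behind the abstract can be illustrated numerically. The sketch below (my own illustration, not the authors' code) estimates the Lyapunov exponent of a narrow, deep, unbiased Leaky ReLU network with i.i.d. Gaussian weights by Monte Carlo: it averages the per-layer growth rate of the log activation norm. Because Leaky ReLU is positively homogeneous, rescaling the weight gain by a factor `c` shifts the Lyapunov exponent by exactly `log c`, so rescaling by `exp(-lam)` drives the estimated exponent to zero — a toy version of the "Lyapunov initialization" principle described above. The width, depth, and trial counts are arbitrary choices for demonstration.

```python
import numpy as np

def estimate_lyapunov(width, depth, gain, alpha=0.01, trials=100, seed=0):
    """Monte Carlo estimate of the Lyapunov exponent: the average
    per-layer growth rate of log ||activation|| in a deep unbiased
    Leaky ReLU network with i.i.d. Gaussian weights of std gain/sqrt(width).
    """
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(trials):
        h = rng.standard_normal(width)
        log_norm0 = np.log(np.linalg.norm(h))
        for _ in range(depth):
            W = rng.standard_normal((width, width)) * gain / np.sqrt(width)
            h = W @ h
            h = np.where(h > 0, h, alpha * h)  # Leaky ReLU
        rates.append((np.log(np.linalg.norm(h)) - log_norm0) / depth)
    return float(np.mean(rates))

# He-style gain sqrt(2): for a narrow network the exponent is typically
# negative, i.e. activation norms shrink exponentially in depth.
lam = estimate_lyapunov(width=4, depth=200, gain=np.sqrt(2))

# Toy "Lyapunov initialization": rescale the gain by exp(-lam) so that
# the Lyapunov exponent of the rescaled network is (approximately) zero.
lam_corrected = estimate_lyapunov(width=4, depth=200,
                                  gain=np.sqrt(2) * np.exp(-lam))
```

Note that the authors compute the Lyapunov exponent explicitly via random matrix theory rather than by simulation; the empirical estimate here merely demonstrates the phenomenon and the zero-exponent correction.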
Problem

Research questions and friction points this paper is trying to address.

deep neural networks
activation stability
vanishing/exploding activations
network initialization
Leaky ReLU
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lyapunov initialization
Leaky ReLU networks
Lyapunov exponent
deep neural network stability
probabilistic analysis
Constantin Kogler
School of Mathematics, Institute for Advanced Study, Princeton, USA
Tassilo Schwarz
Mathematical Institute, University of Oxford
Samuel Kittle
Department of Mathematics, University College London, 25 Gordon Street, London WC1H 0AY, United Kingdom