🤖 AI Summary
This work addresses the instability of activation magnitudes in deep Leaky ReLU networks at initialization, particularly in narrow architectures where conventional He and orthogonal initializations fail to ensure stable signal propagation. The authors introduce, for the first time, the Lyapunov exponent as a tool to analyze activation dynamics in deep networks. By leveraging the law of large numbers and the central limit theorem, they characterize the asymptotic behavior of the logarithm of activation norms and explicitly compute the Lyapunov exponent using random matrix theory. Building on this analysis, they propose Lyapunov initialization, which enforces a zero Lyapunov exponent to achieve optimal activation stability. Empirical results demonstrate that this method significantly outperforms existing initialization strategies and substantially improves training performance in narrow, deep networks.
📝 Abstract
The development of effective initialization methods requires an understanding of random neural networks. In this work, we provide a rigorous probabilistic analysis of deep bias-free Leaky ReLU networks. We prove a Law of Large Numbers and a Central Limit Theorem for the logarithm of the norm of network activations, establishing that, as the number of layers increases, their growth is governed by a parameter called the Lyapunov exponent. This parameter characterizes a sharp phase transition between vanishing and exploding activations, and we compute the Lyapunov exponent explicitly for Gaussian and orthogonal weight matrices. Our results reveal that standard methods, such as He initialization or orthogonal initialization, do not guarantee activation stability for deep networks of low width. Based on these theoretical insights, we propose a novel initialization method, referred to as Lyapunov initialization, which sets the Lyapunov exponent to zero and thereby makes the network as stable as possible, leading empirically to improved learning.
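To make the central quantity concrete, here is a small numerical sketch (not the paper's code; the function name and all parameter values are illustrative) that estimates the Lyapunov exponent λ = lim (1/L) log ‖h_L‖ for a deep, bias-free Leaky ReLU network with iid Gaussian weights under He-style scaling:

```python
import numpy as np

def leaky_relu(x, slope=0.1):
    """Leaky ReLU nonlinearity (positively homogeneous)."""
    return np.where(x > 0, x, slope * x)

def estimate_lyapunov(width=16, depth=200, slope=0.1, gain=1.0, trials=20, seed=0):
    """Monte-Carlo estimate of the Lyapunov exponent
    lambda = lim_L (1/L) log ||h_L|| for a deep, bias-free Leaky ReLU
    network with iid Gaussian weights W_ij ~ N(0, gain^2 * 2 / width)
    (He scaling when gain = 1). The activation vector is renormalized
    after every layer so log-norm increments accumulate without
    overflow or underflow."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        h = rng.standard_normal(width)
        h /= np.linalg.norm(h)
        acc = 0.0
        for _ in range(depth):
            W = rng.standard_normal((width, width)) * (gain * np.sqrt(2.0 / width))
            h = leaky_relu(W @ h, slope)
            nrm = np.linalg.norm(h)
            acc += np.log(nrm)   # log-norm increment of this layer
            h /= nrm             # renormalize to keep h on the unit sphere
        total += acc / depth
    return total / trials
```

At narrow widths this estimate comes out negative under plain He scaling, reflecting the activation drift the paper analyzes. Since Leaky ReLU is positively homogeneous, rescaling every weight matrix by a constant c shifts the exponent by exactly log c, so one way to read the zero-exponent criterion is: estimate λ at the original scale and rescale the weights by exp(−λ).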