🤖 AI Summary
Existing convergence analyses of gradient descent (GD) for deep neural networks often rely on the assumption that the GD map is non-singular, a condition typically justified via strong smoothness assumptions (e.g., Lipschitz continuity of gradients) that fail for non-smooth piecewise analytic activations such as ReLU.
Method: Leveraging piecewise analytic function theory, measure theory, and differential geometry, we analyze the GD map on the weight-bias parameter space without requiring differentiability or global smoothness.
Results: We establish, for the first time, that the GD map is non-singular for almost every step size on practical architectures (including fully connected, convolutional, and softmax-attention networks), i.e., preimages of null sets under the GD map are null. This guarantees that GD and stochastic GD avoid saddle points and converge to global minima under significantly weaker conditions than prior work. Our result substantially broadens the applicability of existing convergence guarantees beyond smooth settings, providing a more general theoretical foundation for optimization in deep learning.
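To see why the smoothness assumptions of prior analyses break down, here is a minimal numerical sketch (our illustration, not from the paper): the derivative of ReLU jumps from 0 to 1 at the origin, so the gradient of a ReLU network's loss cannot be Lipschitz continuous — the difference quotient of the gradient blows up across the kink.

```python
import numpy as np

# Illustrative sketch (ours, not the paper's): ReLU's derivative jumps at 0,
# so no finite Lipschitz constant bounds the gradient's variation there.
relu = lambda x: np.maximum(x, 0.0)
relu_grad = lambda x: np.where(x > 0, 1.0, 0.0)  # a.e. derivative of ReLU

eps = 1e-9
jump = relu_grad(eps) - relu_grad(-eps)  # derivative jumps by 1 across 0
ratio = jump / (2 * eps)                 # |grad(a) - grad(b)| / |a - b| diverges as eps -> 0
print(jump, ratio)
```

Since `ratio` grows without bound as `eps` shrinks, any analysis assuming Lipschitz gradients excludes ReLU networks, which is precisely the gap the paper's piecewise analytic treatment addresses.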
📝 Abstract
The theory of training deep networks has become a central question of modern machine learning and has inspired many practical advancements. In particular, the gradient descent (GD) optimization algorithm has been extensively studied in recent years. A key assumption about GD has appeared in several recent works: the *GD map is non-singular*, i.e., preimages of sets of measure zero have measure zero. Crucially, this assumption has been used to prove that GD avoids saddle points and maxima, and to establish the existence of a computable quantity that determines convergence to global minima (both for GD and stochastic GD). However, the current literature either assumes the non-singularity of the GD map outright or imposes restrictive assumptions, such as Lipschitz smoothness of the loss (which does not hold, for example, for deep ReLU networks with the cross-entropy loss), and restricts the analysis to GD with small step sizes. In this paper, we investigate the neural network map as a function on the space of weights and biases. We prove, for the first time, the non-singularity of the GD map on the loss landscape of realistic neural network architectures (with fully connected, convolutional, or softmax attention layers) and piecewise analytic activations (including sigmoid, ReLU, and leaky ReLU) for almost all step sizes. Our work significantly extends existing results on the convergence of GD and SGD by guaranteeing that they apply to practical neural network settings, and it has the potential to unlock further exploration of learning dynamics.
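The "almost all step sizes" qualifier can be made concrete with a toy example (our sketch, not from the paper). For the quadratic loss L(x) = x²/2, the GD map is g(x) = x − η∇L(x) = (1 − η)x: a non-singular linear map for every step size η ≠ 1, but at the single bad value η = 1 it collapses all of ℝ onto {0}, a null set. Non-singularity for almost every step size excludes exactly such measure-zero sets of bad η.

```python
import numpy as np

# Toy sketch: the GD map g(x) = x - eta * grad_L(x) for L(x) = x**2 / 2.
# Here g(x) = (1 - eta) * x, which preserves null sets under preimages for
# every eta != 1 but maps everything to 0 at the singular step size eta = 1.
def gd_map(x, eta, grad_L):
    return x - eta * grad_L(x)

grad_L = lambda x: x  # gradient of the toy loss L(x) = x**2 / 2

xs = np.linspace(-1.0, 1.0, 5)
print(gd_map(xs, 0.5, grad_L))  # scales by 0.5: injective, non-singular
print(gd_map(xs, 1.0, grad_L))  # all points collapse to 0: singular step size
```

The paper's contribution is to show that, for realistic architectures and piecewise analytic activations, the set of such bad step sizes likewise has measure zero.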