🤖 AI Summary
This work investigates the implicit acceleration phenomenon of gradient descent (GD) in training two-layer neural networks. While GD on linear models suffers from an Ω(d) iteration complexity lower bound, we establish a rigorous equivalence between GD under the logistic loss and the generalized perceptron algorithm, thereby mapping the nonlinear optimization dynamics onto geometrically tractable perceptron updates. Using classical linear algebra and theoretical analysis, complemented by numerical experiments, we prove that this equivalence yields a √d speedup: under minimal realistic assumptions, the iteration complexity improves from Ω(d) to Õ(√d). Our result provides an analytical explanation for rapid convergence in neural network training and shows that the nonlinearity itself can act as a source of optimization acceleration, bypassing a fundamental limitation of linear model theory.
📝 Abstract
Even for the gradient descent (GD) method applied to neural network training, understanding its optimization dynamics, including convergence rate, iterate trajectories, function value oscillations, and especially its implicit acceleration, remains a challenging problem. We analyze nonlinear models with the logistic loss and show that the steps of GD reduce to those of generalized perceptron algorithms (Rosenblatt, 1958), providing a new perspective on the dynamics. This reduction yields significantly simpler algorithmic steps, which we analyze using classical linear algebra tools. Using these tools, we demonstrate on a minimalistic example that the nonlinearity in a two-layer model can provably yield a faster iteration complexity $\tilde{O}(\sqrt{d})$ compared to $\Omega(d)$ achieved by linear models, where $d$ is the number of features. This helps explain the optimization dynamics and the implicit acceleration phenomenon observed in neural networks. The theoretical results are supported by extensive numerical experiments. We believe that this alternative view will further advance research on the optimization of neural networks.
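The GD-to-perceptron reduction can be illustrated in its simplest form on a linear model: for the logistic loss $\log(1 + e^{-y\, w^\top x})$, a single GD step is algebraically identical to a "soft" perceptron update whose step size is the sigmoid of the negative margin. The sketch below checks this identity numerically; the data, dimensions, and step size are hypothetical and chosen only for illustration, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy setup: one sample x in R^d with label y in {-1, +1}.
rng = np.random.default_rng(0)
d = 5
x = rng.normal(size=d)
y = 1.0
w = rng.normal(size=d)
lr = 0.1  # illustrative learning rate

# One GD step on the logistic loss log(1 + exp(-y * <w, x>)):
# the gradient is -sigmoid(-margin) * y * x.
margin = y * (w @ x)
grad = -sigmoid(-margin) * y * x
w_gd = w - lr * grad

# The same step written as a generalized (soft) perceptron update:
# move toward y * x with weight sigmoid(-margin), which vanishes
# as the sample becomes correctly classified with a large margin.
w_perceptron = w + lr * sigmoid(-margin) * y * x

assert np.allclose(w_gd, w_perceptron)
```

In the misclassified limit (margin → −∞), the weight `sigmoid(-margin)` tends to 1 and the update approaches the classical Rosenblatt perceptron step; the paper's analysis extends this correspondence to two-layer nonlinear models.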