🤖 AI Summary
This work investigates the universal limiting behavior of stochastic gradient descent (SGD) dynamics in high-dimensional statistical learning, focusing on losses that depend on the data only through its projection onto a low-dimensional subspace spanned by the parameter and ground-truth vectors. For non-Gaussian data drawn from mixtures of product measures whose first two moments match the corresponding Gaussian distribution, the paper establishes that SGD's summary statistics converge to the same autonomous ordinary differential equation (ODE) limit as in the Gaussian case, provided the initialization and ground-truth vectors are sufficiently coordinate-delocalized; this confirms universality at the ODE level. Conversely, the authors show that the ODE limits can fail to be universal when the initialization is coordinate-aligned, and that the stochastic differential equation (SDE) limits describing fluctuations of the summary statistics around the ODE's fixed points are not universal. The analysis integrates high-dimensional asymptotics, moment-matching arguments, and scaling limit theory, yielding a rigorous asymptotic framework for SGD dynamics under non-Gaussian data.
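As a schematic illustration (using notation assumed here, not necessarily the paper's), in a single-index model the relevant summary statistics are the overlap $m_t = \langle \theta_t, \theta^\ast \rangle / d$ and the norm $q_t = \|\theta_t\|^2 / d$, and the claimed ODE limit takes the form of an autonomous system

$$\frac{d}{dt}\,(m_t, q_t) \;=\; F(m_t, q_t),$$

where the drift $F$ is an expectation of the SGD update over the data distribution, expressed as a function of the summary statistics alone. Universality at the ODE level means this $F$, and hence the whole limiting trajectory, is unchanged when the isotropic Gaussian data is replaced by a moment-matched mixture of product measures; non-universality at the SDE level means the fluctuations around the fixed points of this system can differ.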
📝 Abstract
We consider statistical tasks in high dimensions whose loss depends on the data only through its projection onto a fixed-dimensional subspace spanned by the parameter vectors and certain ground truth vectors. This includes classifying mixture distributions under cross-entropy loss with one- and two-layer networks, and learning single- and multi-index models with one- and two-layer networks. When the data is drawn from an isotropic Gaussian mixture distribution, it is known that the evolution of a finite family of summary statistics under stochastic gradient descent converges to an autonomous ordinary differential equation (ODE), as the dimension and sample size go to $\infty$ and the step size goes to $0$ commensurately. Our main result is that these ODE limits are universal in that this convergence occurs even when the data is drawn from mixtures of product measures, provided the first two moments match the corresponding Gaussian distribution and the initialization and ground truth vectors are sufficiently coordinate-delocalized. We complement this by proving two corresponding non-universality results. We provide a simple example where the ODE limits are non-universal if the initialization is coordinate-aligned. We also show that the stochastic differential equation limits arising as fluctuations of the summary statistics around the ODE's fixed points are not universal.
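The following is a minimal simulation sketch, not the authors' code: online SGD for a single-index model $y = \tanh(\langle \theta^\ast, x\rangle/\sqrt{d})$, where the loss touches the data only through the projections $\langle \theta, x\rangle$ and $\langle \theta^\ast, x\rangle$. It tracks the summary statistic $m_t = \langle \theta_t, \theta^\ast\rangle/d$ for Gaussian data and for Rademacher ($\pm 1$) data, whose first two coordinate moments match the Gaussian. The link function, squared loss, step size, spherical normalization, and horizon are illustrative assumptions rather than the paper's choices; with a delocalized initialization, the coarse trajectories of $m_t$ under the two data models are expected to be close, in the spirit of the ODE-level universality result.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta, n_steps = 500, 0.2, 20_000          # dimension, small step size, ~O(d/eta) steps

# Delocalized ground truth on the sphere of radius sqrt(d).
theta_star = rng.standard_normal(d)
theta_star *= np.sqrt(d) / np.linalg.norm(theta_star)

def run_sgd(sample_x):
    """Online SGD on the per-sample squared loss, returning overlaps m_t along the run."""
    theta = rng.standard_normal(d)
    theta *= np.sqrt(d) / np.linalg.norm(theta)          # delocalized initialization
    overlaps = []
    for t in range(n_steps):
        x = sample_x()
        y = np.tanh(theta_star @ x / np.sqrt(d))          # teacher label
        pred = np.tanh(theta @ x / np.sqrt(d))            # student prediction
        # Gradient of 0.5 * (pred - y)^2 with respect to theta.
        grad = (pred - y) * (1.0 - pred ** 2) * x / np.sqrt(d)
        theta -= eta * grad
        theta *= np.sqrt(d) / np.linalg.norm(theta)       # project back onto the sphere
        if t % (n_steps // 10) == 0:
            overlaps.append(theta @ theta_star / d)       # summary statistic m_t
    return np.round(overlaps, 3)

# Gaussian coordinates vs. Rademacher coordinates (matching first two moments).
print("m_t, Gaussian data:  ", run_sgd(lambda: rng.standard_normal(d)))
print("m_t, Rademacher data:", run_sgd(lambda: rng.choice([-1.0, 1.0], size=d)))
```

Only the ODE-level behavior is expected to agree here: with an aligned (coordinate-localized) initialization or when zooming in on fluctuations near the fixed points, the paper's non-universality results indicate the two data models can genuinely differ.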