Learning quadratic neural networks in high dimensions: SGD dynamics and scaling laws

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies the SGD dynamics and sample complexity of two-layer neural networks with quadratic activation (the second Hermite polynomial) in the high-dimensional regime, where inputs are isotropic Gaussian ($x \sim \mathcal{N}(0, I_d)$) and labels are generated by $r$ orthonormal signal directions with power-law coefficient decay $\lambda_j \asymp j^{-\alpha}$. Under the extensive-width scaling $r \asymp d^{\beta}$, the analysis combines matrix Riccati differential equations with matrix monotonicity arguments to derive a rigorous infinite-dimensional effective dynamical model that precisely characterizes convergence during the feature-learning phase. The theory yields sharp power-law scaling laws for the prediction risk, explicit in the number of optimization steps, the sample size, and the network width, that hold in both the population limit and the finite-sample online-learning setting. The results quantify how gradient descent induces implicit regularization and enables structure-aware learning in this nonlinear regression problem.
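For concreteness, below is a minimal sketch of the data-generating process described above. The overall normalization of $y$ and the probabilist's Hermite convention $\mathrm{He}_2(z) = z^2 - 1$ are assumptions made for illustration, not the paper's exact choices.

```python
import numpy as np

def sample_batch(n, d, r, alpha, seed=0):
    """Sketch of the teacher model: isotropic Gaussian inputs, r orthonormal
    signal directions, power-law second-layer coefficients, He_2 activation."""
    rng = np.random.default_rng(seed)
    # r orthonormal signal directions theta_1, ..., theta_r in R^d (requires d >= r)
    thetas, _ = np.linalg.qr(rng.standard_normal((d, r)))
    # power-law coefficients lambda_j ~ j^{-alpha}
    lams = np.arange(1, r + 1, dtype=float) ** (-alpha)
    # isotropic Gaussian inputs x ~ N(0, I_d)
    X = rng.standard_normal((n, d))
    # quadratic activation = 2nd Hermite polynomial He_2(z) = z^2 - 1
    Z = X @ thetas                      # projections <theta_j, x>, shape (n, r)
    y = (Z ** 2 - 1) @ lams             # y proportional to sum_j lambda_j He_2(<theta_j, x>)
    return X, y, thetas, lams
```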

📝 Abstract
We study the optimization and sample complexity of gradient-based training of a two-layer neural network with quadratic activation function in the high-dimensional regime, where the data is generated as $y \propto \sum_{j=1}^{r} \lambda_j \, \sigma\left(\langle \boldsymbol{\theta}_j, \boldsymbol{x} \rangle\right)$, $\boldsymbol{x} \sim N(0, \boldsymbol{I}_d)$, $\sigma$ is the 2nd Hermite polynomial, and $\{\boldsymbol{\theta}_j\}_{j=1}^{r} \subset \mathbb{R}^d$ are orthonormal signal directions. We consider the extensive-width regime $r \asymp d^{\beta}$ for $\beta \in [0, 1)$, and assume a power-law decay on the (non-negative) second-layer coefficients $\lambda_j \asymp j^{-\alpha}$ for $\alpha \geq 0$. We present a sharp analysis of the SGD dynamics in the feature learning regime, for both the population limit and the finite-sample (online) discretization, and derive scaling laws for the prediction risk that highlight the power-law dependencies on the optimization time, sample size, and model width. Our analysis combines a precise characterization of the associated matrix Riccati differential equation with novel matrix monotonicity arguments to establish convergence guarantees for the infinite-dimensional effective dynamics.
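The sketch below illustrates the online (one-pass) SGD setting referred to in the abstract: a width-$m$ quadratic student trained on fresh samples from the teacher at every step. The width, learning rate, initialization scale, and the choice to train only the first layer are illustrative assumptions, not the paper's exact training protocol.

```python
import numpy as np

def train_online_sgd(d, r, alpha, width=64, steps=20_000, lr=1e-2, seed=0):
    """Schematic one-pass SGD for a two-layer quadratic student
    f(x) = sum_i a_i * He2(<w_i, x>), with He2(z) = z^2 - 1."""
    rng = np.random.default_rng(seed)
    # teacher: orthonormal directions and power-law coefficients (as above)
    thetas, _ = np.linalg.qr(rng.standard_normal((d, r)))
    lams = np.arange(1, r + 1, dtype=float) ** (-alpha)

    W = rng.standard_normal((width, d)) / np.sqrt(d)   # first-layer weights (trained)
    a = np.full(width, 1.0 / width)                    # fixed second layer (assumption)

    for _ in range(steps):
        x = rng.standard_normal(d)                     # fresh sample each step (online SGD)
        y = lams @ ((thetas.T @ x) ** 2 - 1)           # teacher label
        pre = W @ x                                    # preactivations <w_i, x>
        err = a @ (pre ** 2 - 1) - y                   # prediction error
        # gradient of 0.5 * err^2 w.r.t. w_i is err * a_i * 2 * <w_i, x> * x
        W -= lr * err * (2 * a * pre)[:, None] * x[None, :]
    return W, thetas, lams
```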
Problem

Research questions and friction points this paper is trying to address.

Analyzing SGD dynamics for quadratic neural networks in high dimensions
Studying sample complexity of gradient-based training for two-layer networks
Deriving scaling laws for prediction risk in the feature learning regime (a heuristic illustration follows this list)
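To illustrate how a power-law risk scaling can arise in this setting, consider the following back-of-envelope calculation. It is a heuristic, not the paper's derivation, and the constants and exact exponents are not the paper's results: since Hermite components along orthonormal directions are uncorrelated, once the $k$ leading directions are recovered the residual risk is governed by the tail of the squared coefficients.

```latex
% Heuristic tail-sum calculation (illustrative only):
% residual risk after recovering the top-k directions
\[
  \mathcal{R}_k \;\asymp\; \sum_{j > k} \lambda_j^{2}
  \;\asymp\; \sum_{j > k} j^{-2\alpha}
  \;\asymp\; k^{\,1 - 2\alpha},
  \qquad \alpha > \tfrac{1}{2},
\]
% so a power law in the number of learned directions k translates into
% power laws in optimization time, sample size, and width once k is
% expressed in terms of those resources.
```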
Innovation

Methods, ideas, or system contributions that make the work stand out.

SGD dynamics in high-dimensional quadratic networks
Matrix Riccati differential equation for convergence analysis (generic form sketched after this list)
Power-law scaling laws for prediction risk
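For reference, a matrix Riccati differential equation is an ODE whose right-hand side is quadratic in the matrix unknown. A generic form is shown below; the specific coefficient matrices induced by the quadratic-network dynamics depend on the paper's parameterization and are not reproduced here.

```latex
% Generic matrix Riccati ODE: linear terms plus a quadratic term in the
% unknown S(t); A, B, C are fixed coefficient matrices, S_0 the initial value.
\[
  \frac{\mathrm{d} S(t)}{\mathrm{d} t}
  \;=\; A\, S(t) \;+\; S(t)\, A^{\top} \;+\; C \;-\; S(t)\, B\, S(t),
  \qquad S(0) = S_0 .
\]
```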
Gérard Ben Arous
New York University

Murat A. Erdogdu
University of Toronto
Machine Learning · Optimization · Statistics

N. Mert Vural
University of Toronto and Vector Institute

Denny Wu
New York University
Machine Learning · Statistics