🤖 AI Summary
This work investigates the distribution and impact of local minima in high-dimensional empirical risk minimization, under isotropic Gaussian data and projection-type matrix parameterizations (including generalized linear models and two-layer neural networks), in the proportional asymptotic regime where $n,d \to \infty$ with $n \asymp d$. Methodologically, it is the first to rigorously apply the Kac–Rice formula to characterize, for convex losses with $k \geq 2$, the existence and location of local minima and the precise asymptotic spectrum of the Hessian at those minima, thereby confirming a long-standing conjecture. Combining Gaussian process theory, random matrix analysis, and large deviation estimates, the paper derives a tight upper bound on the expected number of local minima and establishes sharp asymptotic expressions for both estimation and prediction errors, achieving exponential-level deviation control.
📝 Abstract
We consider a general model for high-dimensional empirical risk minimization whereby the data $\mathbf{x}_i$ are $d$-dimensional isotropic Gaussian vectors, the model is parametrized by $\mathbf{\Theta}\in\mathbb{R}^{d\times k}$, and the loss depends on the data via the projection $\mathbf{\Theta}^{\mathsf{T}}\mathbf{x}_i$. This setting covers as special cases classical statistics methods (e.g. multinomial regression and other generalized linear models), but also two-layer fully connected neural networks with $k$ hidden neurons. We use the Kac-Rice formula from Gaussian process theory to derive a bound on the expected number of local minima of this empirical risk, under the proportional asymptotics in which $n,d\to\infty$, with $n\asymp d$. Via Markov's inequality, this bound allows us to determine the positions of these minimizers (with exponential deviation bounds) and hence derive sharp asymptotics on the estimation and prediction error. In this paper, we apply our characterization to convex losses, where high-dimensional asymptotics were not (in general) rigorously established for $k\ge 2$. We show that our approach is tight and allows us to prove previously conjectured results. In addition, we characterize the spectrum of the Hessian at the minimizer. A companion paper applies our general result to non-convex examples.
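To make the setup concrete, the empirical risk described above can be written as follows. This is a sketch under assumed notation: the per-sample loss $\ell$ and the responses $y_i$ are not named in the abstract, and the two-layer example is one standard instantiation, not necessarily the paper's exact formulation.

```latex
% Empirical risk over Theta in R^{d x k}, with isotropic Gaussian data
% x_i ~ N(0, I_d); the loss touches x_i only through Theta^T x_i:
\hat{R}_n(\mathbf{\Theta})
  \;=\; \frac{1}{n}\sum_{i=1}^{n}
        \ell\bigl(\mathbf{\Theta}^{\mathsf{T}}\mathbf{x}_i;\, y_i\bigr),
\qquad
\mathbf{x}_i \sim \mathsf{N}(\mathbf{0},\mathbf{I}_d),
\quad
\mathbf{\Theta}\in\mathbb{R}^{d\times k}.
% Example instantiation (hypothetical): a two-layer network with k hidden
% neurons, activation sigma, and fixed second-layer weights a in R^k,
% under squared loss:
%   ell(u; y) = ( y - a^T sigma(u) )^2,  u = Theta^T x_i in R^k.
% Multinomial regression fits the same template with ell the
% cross-entropy of the softmax of u.
```

Note that the proportional asymptotics $n \asymp d$ mean the number of samples and the ambient dimension grow at the same rate, while $k$ stays fixed, so $\mathbf{\Theta}^{\mathsf{T}}\mathbf{x}_i$ is a low-dimensional projection of high-dimensional data.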