How Does the ReLU Activation Affect the Implicit Bias of Gradient Descent on High-dimensional Neural Network Regression?

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the implicit bias of over-parameterized shallow ReLU neural networks trained via gradient descent on high-dimensional random features. Focusing on the squared-loss setting, where global minima are non-unique, the authors introduce a novel primal-dual analysis framework that tracks the joint dynamics of the predictions, the data-span expansion coefficients, and their interactions. They prove that the ReLU activation patterns stabilize rapidly with high probability in high dimensions. This yields the first quantitative characterization of the approximation error between the gradient descent solution and the minimum ℓ²-norm solution: when the feature dimension \(d\) greatly exceeds the sample size \(n\), the implicit bias matches the minimum ℓ²-norm solution with high probability, up to an error of order \(\Theta(\sqrt{n/d})\). This result bridges the gap between the worst-case and exactly-orthogonal-data analyses in existing theory.

📝 Abstract
Overparameterized ML models, including neural networks, typically induce underdetermined training objectives with multiple global minima. The implicit bias refers to the limiting global minimum that is attained by a common optimization algorithm, such as gradient descent (GD). In this paper, we characterize the implicit bias of GD for training a shallow ReLU model with the squared loss on high-dimensional random features. Prior work showed that the implicit bias does not exist in the worst-case (Vardi and Shamir, 2021), or corresponds exactly to the minimum-l2-norm solution among all global minima under exactly orthogonal data (Boursier et al., 2022). Our work interpolates between these two extremes and shows that, for sufficiently high-dimensional random data, the implicit bias approximates the minimum-l2-norm solution with high probability with a gap on the order $\Theta(\sqrt{n/d})$, where n is the number of training examples and d is the feature dimension. Our results are obtained through a novel primal-dual analysis, which carefully tracks the evolution of predictions, data-span coefficients, as well as their interactions, and shows that the ReLU activation pattern quickly stabilizes with high probability over the random data.
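The minimum-ℓ²-norm implicit bias that the paper establishes approximately for shallow ReLU networks holds exactly in the plain linear (fixed-feature) least-squares case, which is a useful mental model here: with `n < d`, gradient descent started at the origin stays in the row space of the data and converges to the pseudoinverse (minimum-norm) interpolant. The sketch below is a generic NumPy illustration of that linear fact, not the paper's ReLU analysis; all dimensions, the step size, and the seed are arbitrary choices.

```python
import numpy as np

# Underdetermined least squares: n < d, so global minima of
# 0.5 * ||X w - y||^2 form an affine subspace. GD from w = 0 keeps
# w in the row space of X and so converges to the minimum
# l2-norm interpolant X^+ y. (In the paper's shallow ReLU setting
# this holds only approximately, with a Theta(sqrt(n/d)) gap.)

rng = np.random.default_rng(0)
n, d = 20, 500                          # high-dimensional regime: d >> n
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = rng.standard_normal(n)

w = np.zeros(d)                         # initialize at the origin
lr = 0.5                                # below 2 / sigma_max(X)^2 here
for _ in range(2000):
    w -= lr * X.T @ (X @ w - y)         # gradient of 0.5 * ||X w - y||^2

w_min_norm = np.linalg.pinv(X) @ y      # minimum l2-norm global minimum
gap = np.linalg.norm(w - w_min_norm)    # essentially zero in this linear case
```

In this fixed-feature setting `gap` is at the level of numerical precision; the paper's contribution is quantifying how far the analogous gap can be when the ReLU features themselves evolve during training.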
Problem

Research questions and friction points this paper is trying to address.

implicit bias
ReLU activation
gradient descent
overparameterized neural networks
high-dimensional regression
Innovation

Methods, ideas, or system contributions that make the work stand out.

implicit bias
ReLU activation
high-dimensional random features
primal-dual analysis
gradient descent