🤖 AI Summary
This work investigates the phenomenon of "benign overfitting" in two-layer neural networks trained via gradient descent on linearly separable data corrupted by adversarial label noise. **Problem:** Can a nonlinear model achieve zero training error (i.e., perfectly fit noisy labels) while simultaneously attaining minimax-optimal test error? **Method:** The authors analyze gradient descent from random initialization on the logistic loss, under the assumption that the data follow well-separated class-conditional log-concave distributions. **Contribution/Results:** The paper gives the first rigorous proof of benign overfitting for a *nonlinear model* under *nonlinear optimization dynamics*, establishing that zero training error and minimax-optimal generalization error can coexist even when a constant fraction of labels is adversarially flipped. Unlike prior analyses, which relied on linear or kernel-based predictors, this result provides a formal foundation for benign overfitting in a genuinely nonlinear deep-learning setting.
📝 Abstract
Benign overfitting, the phenomenon where interpolating models generalize well in the presence of noisy data, was first observed in neural network models trained with gradient descent. To better understand this empirical observation, we consider the generalization error of two-layer neural networks trained to interpolation by gradient descent on the logistic loss following random initialization. We assume the data comes from well-separated class-conditional log-concave distributions and allow for a constant fraction of the training labels to be corrupted by an adversary. We show that in this setting, neural networks exhibit benign overfitting: they can be driven to zero training error, perfectly fitting any noisy training labels, and simultaneously achieve minimax optimal test error. In contrast to previous work on benign overfitting that requires linear or kernel-based predictors, our analysis holds in a setting where both the model and learning dynamics are fundamentally nonlinear.
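The setting described in the abstract can be sketched numerically: sample data from well-separated class-conditional Gaussian (hence log-concave) distributions, flip a constant fraction of labels, and run full-batch gradient descent on the logistic loss of a two-layer ReLU network from random initialization. This is a minimal illustrative sketch, not the paper's analysis; the dimensions, width, separation, learning rate, and the common simplification of a fixed second layer are all assumptions chosen for readability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setting (illustrative choices, not the paper's exact regime).
n, d = 200, 50                      # training samples, input dimension
mu = np.zeros(d)
mu[0] = 4.0                         # class means at +/- mu: well separated

y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.standard_normal((n, d))  # Gaussian class-conditionals

# Adversarial label noise: flip a constant fraction of the training labels.
noise_frac = 0.1
flip = rng.choice(n, size=int(noise_frac * n), replace=False)
y_noisy = y.copy()
y_noisy[flip] *= -1.0

# Two-layer ReLU network f(x) = sum_j a_j relu(w_j . x), random init.
# Training only the first layer (fixed second layer) is a common simplification.
m = 100                             # hidden width
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(X, W, a):
    return np.maximum(X @ W.T, 0.0) @ a   # outputs, shape (n,)

lr = 0.5
for _ in range(2000):
    f = forward(X, W, a)
    # Logistic loss l(z) = log(1 + exp(-z)); derivative wrt the output:
    margins = np.clip(y_noisy * f, -50.0, 50.0)   # clip to avoid overflow
    g = -y_noisy / (1.0 + np.exp(margins))        # shape (n,)
    active = (X @ W.T > 0.0).astype(float)        # ReLU derivative, (n, m)
    # dL/dW_j = (1/n) sum_i g_i * a_j * 1[w_j . x_i > 0] * x_i
    grad_W = ((g[:, None] * active) * a[None, :]).T @ X / n
    W -= lr * grad_W

# Training error is measured against the *noisy* labels (interpolation target);
# test error against clean labels from a fresh sample of the same distribution.
train_err = np.mean(np.sign(forward(X, W, a)) != y_noisy)
y_te = rng.choice([-1.0, 1.0], size=1000)
X_te = y_te[:, None] * mu + rng.standard_normal((1000, d))
test_err = np.mean(np.sign(forward(X_te, W, a)) != y_te)
print(f"train error vs noisy labels: {train_err:.3f}, clean test error: {test_err:.3f}")
```

In the benign-overfitting regime the paper studies, the analogue of `train_err` is driven to exactly zero (the noisy labels are perfectly fit) while the clean test error remains near the minimax-optimal rate; this toy run only illustrates the experimental shape of that claim, not the theorem itself.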