Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data

📅 2022-02-11
🏛️ Annual Conference on Computational Learning Theory (COLT)
📈 Citations: 75
Influential: 11
📄 PDF
🤖 AI Summary
This work investigates the phenomenon of “benign overfitting” in two-layer neural networks trained by gradient descent on linearly separable data corrupted by adversarial label noise. **Problem:** Can a nonlinear model achieve zero training error, perfectly fitting noisy labels, while simultaneously attaining minimax-optimal test error? **Method:** The authors analyze gradient descent with random initialization on the logistic loss, assuming the data come from well-separated class-conditional log-concave distributions. **Contribution/Results:** They give the first rigorous proof of benign overfitting for a *nonlinear model* trained under *nonlinear optimization dynamics*, establishing that zero training error and minimax-optimal generalization error can coexist even when a constant fraction of labels is adversarially flipped. This goes beyond prior analyses of benign overfitting, which relied on linear or kernel-based predictors, and provides a formal foundation for nonlinear benign overfitting in deep learning.
📝 Abstract
Benign overfitting, the phenomenon where interpolating models generalize well in the presence of noisy data, was first observed in neural network models trained with gradient descent. To better understand this empirical observation, we consider the generalization error of two-layer neural networks trained to interpolation by gradient descent on the logistic loss following random initialization. We assume the data come from well-separated class-conditional log-concave distributions and allow for a constant fraction of the training labels to be corrupted by an adversary. We show that in this setting, neural networks exhibit benign overfitting: they can be driven to zero training error, perfectly fitting any noisy training labels, and simultaneously achieve minimax optimal test error. In contrast to previous work on benign overfitting that requires linear or kernel-based predictors, our analysis holds in a setting where both the model and learning dynamics are fundamentally nonlinear.
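The setting the abstract describes can be illustrated with a minimal NumPy sketch: class-conditional Gaussian data (a stand-in for the paper's log-concave assumption), a fraction of flipped labels, and a two-layer leaky-ReLU network whose first layer is trained by full-batch gradient descent on the logistic loss. All dimensions, the activation, the learning rate, and the choice to fix the second layer are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Class-conditional data: Gaussian clusters around +/- mu (assumption:
# isotropic Gaussians stand in for the paper's log-concave distributions).
d, n, noise_rate = 500, 100, 0.1
mu = np.zeros(d)
mu[0] = 4.0                                   # well-separated class means
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.standard_normal((n, d))
y_noisy = y.copy()
flip = rng.choice(n, size=int(noise_rate * n), replace=False)
y_noisy[flip] *= -1                           # corrupted labels (random flips here)

# Two-layer net f(x) = sum_j a_j * phi(w_j . x), leaky-ReLU activation,
# random init; only the first layer is trained (fixed second layer a).
m, lr, steps = 64, 0.1, 1500
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def leaky(z, alpha=0.1):
    return np.where(z > 0, z, alpha * z)

def dleaky(z, alpha=0.1):
    return np.where(z > 0, 1.0, alpha)

for _ in range(steps):
    Z = X @ W.T                               # (n, m) pre-activations
    f = leaky(Z) @ a                          # network outputs
    # d/df log(1 + exp(-y f)) = -y * sigmoid(-y f), written stably via tanh
    g = -y_noisy * 0.5 * (1.0 - np.tanh(0.5 * y_noisy * f))
    GW = ((g[:, None] * dleaky(Z)) * a).T @ X / n
    W -= lr * GW                              # full-batch gradient step

train_err = np.mean(np.sign(leaky(X @ W.T) @ a) != y_noisy)
# Clean test set drawn from the same distribution (no label noise).
y_te = rng.choice([-1.0, 1.0], size=2000)
X_te = y_te[:, None] * mu + rng.standard_normal((2000, d))
test_err = np.mean(np.sign(leaky(X_te @ W.T) @ a) != y_te)
print(f"train error on noisy labels: {train_err:.3f}, clean test error: {test_err:.3f}")
```

With enough gradient steps in this high-dimensional regime (d much larger than n), the network can fit the flipped labels while the clean test error stays small, which is the qualitative behavior the paper proves. The sketch does not reproduce the paper's quantitative minimax-optimality guarantees.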
Problem

Research questions and friction points this paper is trying to address.

Can neural networks that interpolate noisy training data still generalize well (benign overfitting)?
How does the generalization error of two-layer networks behave when a constant fraction of labels is corrupted?
Does benign overfitting extend beyond linear and kernel models to nonlinear models and training dynamics?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analysis of two-layer neural networks trained to interpolation by gradient descent on the logistic loss
Proof of benign overfitting on linearly separable data with adversarially corrupted labels
Guarantees that hold when both the model and the learning dynamics are nonlinear