Optimal Rates for Generalization of Gradient Descent for Deep ReLU Classification

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the suboptimal generalization rate of gradient descent (GD) in deep ReLU networks: existing analyses either yield only the suboptimal $O(1/\sqrt{n})$ rate or incur exponential dependence on depth $L$ due to smoothness assumptions on activations. To resolve this, we establish, for the first time, a generalization bound for deep ReLU networks that matches the minimax-optimal rate $\widetilde{O}(1/(n\gamma^2))$ of kernel methods. Our key innovation is a "reference-model-based activation pattern control" technique, which substantially improves the precision of Rademacher complexity analysis. Integrating this with the NTK separability assumption, GD trajectory analysis, and characterization of activation stability, we derive an excess risk upper bound of $\widetilde{O}(L^4(1+\gamma L^2)/(n\gamma^2))$. This reduces the depth dependence from exponential to polynomial, thereby substantially closing the gap to the kernel-method optimal rate.

📝 Abstract
Recent advances have significantly improved our understanding of the generalization performance of gradient descent (GD) methods in deep neural networks. A natural and fundamental question is whether GD can achieve generalization rates comparable to the minimax optimal rates established in the kernel setting. Existing results either yield suboptimal rates of $O(1/\sqrt{n})$, or focus on networks with smooth activation functions, incurring exponential dependence on network depth $L$. In this work, we establish optimal generalization rates for GD with deep ReLU networks by carefully trading off optimization and generalization errors, achieving only polynomial dependence on depth. Specifically, under the assumption that the data are NTK separable with margin $\gamma$, we prove an excess risk rate of $\widetilde{O}(L^4 (1 + \gamma L^2) / (n \gamma^2))$, which aligns with the optimal SVM-type rate $\widetilde{O}(1 / (n \gamma^2))$ up to depth-dependent factors. A key technical contribution is our novel control of activation patterns near a reference model, enabling a sharper Rademacher complexity bound for deep ReLU networks trained with gradient descent.
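To make the headline rates concrete, the sketch below evaluates the paper's excess risk bound $\widetilde{O}(L^4 (1 + \gamma L^2) / (n \gamma^2))$ against the SVM-type kernel rate $\widetilde{O}(1/(n\gamma^2))$, ignoring logarithmic factors. The function names and the constant `c` are illustrative assumptions, not part of the paper; the point is only that the depth dependence is a fixed polynomial, so doubling $L$ inflates the bound by at most a constant factor ($2^6 = 64$ from the $L^4 \cdot L^2$ terms) rather than an exponential one.

```python
def excess_risk_bound(n, gamma, L, c=1.0):
    """Excess risk upper bound L^4 (1 + gamma L^2) / (n gamma^2),
    up to log factors; c stands in for the unspecified constant."""
    return c * L**4 * (1 + gamma * L**2) / (n * gamma**2)

def kernel_optimal_rate(n, gamma, c=1.0):
    """Minimax-optimal SVM-type rate 1 / (n gamma^2), up to log factors."""
    return c / (n * gamma**2)

# Polynomial depth dependence: going from depth 4 to depth 8 inflates the
# bound by at most 2^4 * 2^2 = 64, whereas a smooth-activation analysis
# with exponential depth dependence would square-or-worse the factor.
ratio = excess_risk_bound(10_000, 0.1, 8) / excess_risk_bound(10_000, 0.1, 4)
assert ratio <= 64
```

Both rates share the $1/(n\gamma^2)$ sample dependence, so the gap to the kernel-optimal rate is purely the depth-dependent prefactor.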
Problem

Research questions and friction points this paper is trying to address.

Achieving optimal generalization rates for gradient descent in deep ReLU networks
Overcoming exponential depth dependence in neural network generalization analysis
Establishing minimax-optimal rates with polynomial depth dependence for ReLU classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Established the first generalization bound for deep ReLU networks matching the minimax-optimal kernel rate $\widetilde{O}(1/(n\gamma^2))$ up to depth-dependent factors
Reduced depth dependence from exponential to polynomial by carefully trading off optimization and generalization errors along the GD trajectory
Introduced a reference-model-based control of activation patterns, yielding a sharper Rademacher complexity bound