🤖 AI Summary
This paper addresses the generalization of overparameterized deep ReLU networks: can their test error be bounded independently of network architecture—i.e., depth, width, parameter count, and VC dimension—and instead depend solely on data geometry, regularity of the ReLU activation, and operator norms of weights and ℓ² norms of biases?
Method: The authors derive the first architecture-agnostic, explicit generalization upper bound via metric-geometric analysis, leveraging the Lipschitz continuity of ReLU and jointly constraining weight operator norms and bias ℓ² norms.
Contribution/Results: They prove that, when the training sample size does not exceed the input dimension, a zero-training-loss solution can be constructed explicitly, without gradient descent; moreover, the generalization error remains controlled and does not degrade with increasing overparameterization. This yields a rigorous, model-size-independent generalization guarantee for deep ReLU networks, grounded in geometric and analytic properties rather than classical complexity measures.
📝 Abstract
We prove that overparametrized neural networks are able to generalize with a test error that is independent of the level of overparametrization, and independent of the Vapnik-Chervonenkis (VC) dimension. We prove explicit bounds that only depend on the metric geometry of the test and training sets, on the regularity properties of the activation function, and on the operator norms of the weights and norms of biases. For overparametrized deep ReLU networks with a training sample size bounded by the input space dimension, we explicitly construct zero loss minimizers without use of gradient descent, and prove that the generalization error is independent of the network architecture.
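To make the interpolation regime concrete, here is a minimal toy sketch (not the paper's actual construction): when the number of training samples N does not exceed the input dimension d, generic data can be fit exactly by an affine map, and the identity z = ReLU(z) − ReLU(−z) lets a small ReLU network realize that map, giving zero training loss with no gradient descent. All variable names and the network shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 10, 8                      # input dimension d >= sample size N
X = rng.standard_normal((N, d))   # training inputs (generic position)
y = rng.standard_normal(N)        # scalar training targets

# Solve X w + b = y exactly: the system is underdetermined (N <= d + 1),
# so the least-squares solution interpolates with zero residual.
A = np.hstack([X, np.ones((N, 1))])        # append a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:-1], coef[-1]

relu = lambda z: np.maximum(z, 0.0)

def relu_net(x):
    """Two-unit ReLU network computing w.x + b via ReLU(z) - ReLU(-z)."""
    z = x @ w + b
    return relu(z) - relu(-z)

train_loss = np.max(np.abs(relu_net(X) - y))
print(train_loss)  # ~0 up to floating-point precision
```

The point of the sketch is only that zero training loss is attainable by direct construction in this regime; the paper's bounds additionally control the weights' operator norms and the biases' norms, which this toy solver does not attempt.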