Architecture independent generalization bounds for overparametrized deep ReLU networks

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the generalization of overparameterized deep ReLU networks: can their test error be bounded independently of the network architecture (depth, width, parameter count, and VC dimension), depending instead only on the metric geometry of the data, the regularity of the ReLU activation, and the operator norms of the weights together with the ℓ² norms of the biases? Method: The authors derive the first architecture-agnostic, explicit generalization upper bound via a metric-geometric analysis, leveraging the Lipschitz continuity of ReLU and jointly constraining the weight operator norms and bias ℓ² norms. Contribution/Results: For training sample sizes bounded by the input space dimension, they explicitly construct zero-loss minimizers without gradient descent, and they prove that the generalization error remains controlled and does not degrade as overparameterization increases. This provides a rigorous, model-size-independent generalization guarantee for deep ReLU networks, grounded in geometric and analytic properties rather than classical complexity measures.
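To make the mechanism concrete: arguments of the metric-geometric kind described above rest on the fact that a ReLU network's Lipschitz constant is controlled by the product of its weight operator norms. The following is a generic, illustrative estimate for a standard feedforward architecture, not the bound actually proved in the paper.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative only: a generic Lipschitz estimate for a feedforward ReLU
% network, not the theorem proved in the paper.
Since $\sigma(x) = \max(0, x)$ is $1$-Lipschitz, a depth-$L$ network
$f(x) = W_L\,\sigma(\cdots \sigma(W_1 x + b_1) \cdots) + b_L$ satisfies
\[
  \| f(x) - f(x') \|
  \;\le\;
  \Bigl( \prod_{\ell=1}^{L} \| W_\ell \|_{\mathrm{op}} \Bigr) \, \| x - x' \| .
\]
% Bounding the operator norms of the weights (the biases shift the network
% but do not change its slope) controls how much outputs can differ on
% metrically nearby inputs, with no reference to width, depth, or
% parameter count.
\end{document}
```

An estimate of this shape depends on the data only through distances ‖x − x′‖, which is why the geometry of the test and training sets can enter a bound while the architecture does not.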

📝 Abstract
We prove that overparametrized neural networks are able to generalize with a test error that is independent of the level of overparametrization, and independent of the Vapnik-Chervonenkis (VC) dimension. We prove explicit bounds that only depend on the metric geometry of the test and training sets, on the regularity properties of the activation function, and on the operator norms of the weights and norms of biases. For overparametrized deep ReLU networks with a training sample size bounded by the input space dimension, we explicitly construct zero loss minimizers without use of gradient descent, and prove that the generalization error is independent of the network architecture.
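As a toy illustration of the regime in the abstract's last sentence (training sample size at most the input space dimension), one can assemble a zero-loss ReLU interpolant in closed form: shift all pre-activations into the ReLU's linear region, then solve the resulting linear system exactly. This is a minimal sketch under generic-position assumptions, with made-up data; it is not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 10, 8                      # input dimension d, sample size N <= d
X = rng.normal(size=(N, d))       # training inputs (rows generically independent)
y = rng.normal(size=N)            # scalar training targets

# Layer 1: choose the affine map so every training pre-activation is strictly
# positive, hence ReLU acts as the identity on the training set.
W1 = np.eye(d)
b1 = (np.abs(X).max() + 1.0) * np.ones(d)
H = np.maximum(X @ W1.T + b1, 0.0)          # hidden activations, shape (N, d)

# Layer 2: exact interpolation reduces to the linear system H w + b = y.
# With N <= d the augmented system is generically solvable, so lstsq
# returns an exact (zero-residual) solution.
A = np.hstack([H, np.ones((N, 1))])         # absorb the output bias
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
w2, b2 = theta[:-1], theta[-1]

def f(x):
    """Two-layer ReLU network assembled above; no gradient descent used."""
    return np.maximum(x @ W1.T + b1, 0.0) @ w2 + b2

print(np.max(np.abs(f(X) - y)))             # numerically zero training loss
```

The point of the shift b1 is that ReLU is affine on the training data, so exact fitting becomes plain linear algebra; with N ≤ d the system has more unknowns than equations and an exact solution exists for generic inputs.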
Problem

Research questions and friction points this paper is trying to address.

Generalization bounds for overparametrized ReLU networks
Test error independent of overparametrization and VC dimension
Zero loss minimizers without gradient descent
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bounds depend on metric geometry and activation regularity
Construct zero loss minimizers without gradient descent
Generalization error independent of network architecture
Authors

Thomas Chen
University of Texas at Austin
Analysis, Deep Learning, Mathematical Physics
Chun-Kai Kevin Chien
Patricia Munoz Ewald
Andrew G. Moore