Deeper or Wider: A Perspective from Optimal Generalization Error with Sobolev Loss

📅 2024-01-31
🏛️ International Conference on Machine Learning
📈 Citations: 12
Influential: 1
🤖 AI Summary
This work investigates the optimal generalization error of deeper neural networks (DeNNs) with a flexible number of layers versus wider neural networks (WeNNs) with a limited number of hidden layers under Sobolev-norm losses. Addressing the "depth vs. width" architectural selection problem, it establishes a theoretically grounded criterion: a larger parameter budget favors WeNNs, whereas a larger sample size and higher-order Sobolev regularity in the loss favor DeNNs. Methodologically, the analysis combines Sobolev-space generalization error bounds with the Deep Ritz method and the physics-informed neural network (PINN) framework to derive interpretable, theory-driven design principles, which are then applied to PDE-solving tasks with both approaches. The results provide a generalization-error-theoretic foundation for depth-width selection in neural PDE solvers, bridging learning-theoretic guarantees and practical discretization.
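For context, here is a minimal sketch of what a Sobolev-norm loss looks like in the simplest case, the H^1 norm on a domain Ω (the paper's analysis covers more general Sobolev losses):

\[
\mathcal{L}(\theta) \;=\; \|u_\theta - u\|_{H^1(\Omega)}^2
\;=\; \int_\Omega |u_\theta(x) - u(x)|^2 \, dx
\;+\; \int_\Omega |\nabla u_\theta(x) - \nabla u(x)|^2 \, dx,
\]

where u_θ is the network and u the target. Higher Sobolev orders add further derivative terms; this is the "regularity of the loss" that the analysis shows leans the choice toward deeper architectures.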

📝 Abstract
Constructing the architecture of a neural network is a challenging pursuit for the machine learning community, and the dilemma of whether to go deeper or wider remains a persistent question. This paper explores a comparison between deeper neural networks (DeNNs) with a flexible number of layers and wider neural networks (WeNNs) with limited hidden layers, focusing on their optimal generalization error in Sobolev losses. Analytical investigations reveal that the architecture of a neural network can be significantly influenced by various factors, including the number of sample points, parameters within the neural networks, and the regularity of the loss function. Specifically, a higher number of parameters tends to favor WeNNs, while an increased number of sample points and greater regularity in the loss function lean towards the adoption of DeNNs. We ultimately apply this theory to address partial differential equations using deep Ritz and physics-informed neural network (PINN) methods, guiding the design of neural networks.
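As a concrete, hypothetical illustration of the design choice the paper analyzes, the sketch below builds a wider shallow MLP and a deeper narrow MLP and trains each with a PINN-style loss for the 1D Poisson problem -u'' = π² sin(πx) on (0, 1) with zero boundary values. The widths, depths, sample counts, and optimizer settings are illustrative assumptions, not the paper's configuration or implementation.

import torch
import torch.nn as nn

def make_mlp(width, depth):
    # MLP with `depth` hidden layers of `width` units and tanh activations.
    layers = [nn.Linear(1, width), nn.Tanh()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.Tanh()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

wenn = make_mlp(width=256, depth=2)   # "wider": few hidden layers, many units each
denn = make_mlp(width=32, depth=12)   # "deeper": many hidden layers, few units each

def pinn_loss(model, n_interior=128):
    # Strong-form PDE residual on random interior points plus a boundary penalty.
    x = torch.rand(n_interior, 1, requires_grad=True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = (torch.pi ** 2) * torch.sin(torch.pi * x)   # exact solution is sin(pi * x)
    residual = (-d2u - f).pow(2).mean()
    xb = torch.tensor([[0.0], [1.0]])
    boundary = model(xb).pow(2).mean()              # enforce u(0) = u(1) = 0
    return residual + boundary

# Train both architectures under the same optimizer and sample budget, then compare;
# the paper's theory predicts which regime (parameters vs. samples) favors which.
for name, model in [("WeNN", wenn), ("DeNN", denn)]:
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = pinn_loss(model)
        loss.backward()
        opt.step()
    print(name, "final loss:", float(loss))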
Problem

Research questions and friction points this paper is trying to address.

Compares deeper versus wider neural networks for optimal generalization error
Analyzes impact of sample size, parameters, and loss regularity on architecture
Applies theory to neural network design for solving partial differential equations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derives optimal generalization error bounds for DeNNs and WeNNs under Sobolev-norm losses
Identifies parameter count, sample size, and loss regularity as the factors governing the depth-width choice
Translates the theory into architecture guidance for Deep Ritz and PINN-based PDE solvers