Approximation Rates of Shallow Neural Networks: Barron Spaces, Activation Functions and Optimality Analysis

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the approximation capacity of shallow neural networks for high-dimensional functions in the Barron space. Method: for activation functions that are powers of exponential functions, as well as for ReLU^k, the authors systematically characterize how the approximation rate depends on the dimension and the smoothness of the target function, under ℓ¹ coefficient constraints and across multiple norms, using tools from functional analysis and approximation theory. Results: shallow networks fail to achieve the optimal approximation rate in the Barron space when ℓ¹ coefficient constraints are imposed or when the target function lacks sufficient smoothness, revealing an intrinsic “curse of dimensionality” in this setting. Crucially, this mechanism is contrasted with the one in Sobolev spaces, highlighting fundamental differences in their dimensional scaling behaviors. The work provides the first sharp, matching upper and lower bounds on optimal approximation rates in both Barron and Sobolev spaces, furnishing rigorous mathematical foundations for activation function design, architecture selection, and high-dimensional learning theory.
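
For orientation, the shallow-network model behind this summary can be sketched as follows (standard notation, with $n$ neurons, outer weights $a_i$, inner weights $w_i$, and biases $b_i$; these symbols are generic conventions, not taken from the paper):

$$f_n(x) = \sum_{i=1}^{n} a_i\,\sigma(w_i \cdot x + b_i), \qquad x \in \mathbb{R}^d, \qquad \sigma(t) = \max(0,t)^k \text{ for ReLU}^k,$$

with the ℓ¹ coefficient constraint $\sum_{i=1}^{n} |a_i| \le C$. The question studied is how fast the best error $\inf_{f_n} \|f - f_n\|$ can decay in $n$, uniformly over the unit ball of the Barron space, as a function of $d$ and $k$.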

📝 Abstract
This paper investigates the approximation properties of shallow neural networks with activation functions that are powers of exponential functions. It focuses on the dependence of the approximation rate on the dimension and the smoothness of the function being approximated within the Barron function space. We examine the approximation rates of ReLU$^{k}$ activation functions, proving that the optimal rate cannot be achieved under $\ell^{1}$-bounded coefficients or insufficient smoothness conditions. We also establish optimal approximation rates in various norms for functions in Barron spaces and Sobolev spaces, confirming the curse of dimensionality. Our results clarify the limits of shallow neural networks' approximation capabilities and offer insights into the selection of activation functions and network structures.
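
As background for the abstract (a standard definition from the literature, which the paper may state in a variant form): the spectral Barron space with smoothness index $s$ is commonly defined through the Fourier transform $\hat{f}$ by

$$\|f\|_{\mathcal{B}^{s}} = \int_{\mathbb{R}^d} (1+|\omega|)^{s}\,|\hat{f}(\omega)|\,d\omega,$$

and Barron's classical theorem yields, for finite $\|f\|_{\mathcal{B}^{1}}$, an $n$-neuron approximant with $\|f - f_n\|_{L^2} = O(n^{-1/2})$: the exponent is dimension-free, and the dimension enters only through the constant.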
Problem

Research questions and friction points this paper is trying to address.

Analyzes approximation rates of shallow neural networks with exponential-power and ReLU^k activations
Examines how rates in Barron spaces depend on dimension and smoothness, and under which conditions optimal rates are attainable (benchmark rates are sketched below)
Establishes approximation limits and guidelines for activation function selection
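
For context on the dimension and smoothness dependence above (a known benchmark from the literature, e.g. the sharp rates of Siegel and Xu, not a statement of this paper's results): for ReLU$^{k}$ activations and targets in the spectral Barron space, the optimal $L^{2}$ rate for $n$-neuron shallow networks takes the form

$$\inf_{f_n} \|f - f_n\|_{L^{2}} \lesssim n^{-\frac{1}{2}-\frac{2k+1}{2d}}\,\|f\|_{\mathcal{B}},$$

so the gain over the classical $n^{-1/2}$ shrinks as $d$ grows, which is one concrete way the dimension enters the rate.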
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes shallow networks with exponential-power activations
Establishes optimal rates in Barron and Sobolev spaces (the classical Sobolev benchmark is recalled below)
Identifies limitations under bounded-coefficient conditions
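
For contrast, the classical Sobolev-space benchmark (standard background, stated under the usual continuous-selection assumptions, not a claim quoted from this paper): approximating the unit ball of $W^{s,p}([0,1]^d)$ with any $n$-parameter scheme cannot beat

$$\sup_{\|f\|_{W^{s,p}} \le 1}\, \inf_{f_n} \|f - f_n\|_{L^{p}} \gtrsim n^{-s/d},$$

so the exponent degrades linearly in $d$; this is the curse-of-dimensionality scaling that the paper contrasts with the Barron-space mechanism.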
Jian Lu
Shenzhen University
Signal processing · Image processing · Machine Learning
Xiaohuang Huang
School of Mathematical Sciences, Shenzhen University, Shenzhen 518060, China