Some Super-approximation Rates of ReLU Neural Networks for Korobov Functions

📅 2025-07-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the approximation error of ReLU neural networks for Korobov functions under the $L_p$ and $W^1_p$ norms. To mitigate the curse of dimensionality in high-dimensional function approximation, we propose a constructive method that integrates sparse-grid finite elements with the bit-extraction technique, explicitly prescribing network width and depth to achieve super-approximation rates. Theoretically, we establish nearly optimal approximation bounds: $O(N^{-2m})$ in the $L_p$ norm and $O(N^{-(2m-2)})$ in the $W^1_p$ norm, where $N$ denotes the total number of trainable parameters. This substantially improves upon the classical lowest-order $L_\infty$ and $H^1$ error bounds. Our analysis demonstrates that the proposed architecture significantly alleviates the dependence on dimension, providing both tight theoretical guarantees and an implementable construction paradigm for efficient neural approximation of high-dimensional smooth functions.
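In schematic form, the claimed rates read as follows, writing $f$ for a Korobov function with order-$m$ mixed derivatives in $L_p$ and $\phi_N$ for a ReLU network realization with $N$ trainable parameters; suppressing constants and any logarithmic factors is an assumption of this restatement, not a statement from the paper:

```latex
% Schematic restatement of the claimed super-approximation rates.
% Assumption: dimension-dependent constants and possible log factors
% are absorbed into the implicit constant of \lesssim.
\[
  \| f - \phi_N \|_{L_p(\Omega)} \lesssim N^{-2m},
  \qquad
  \| f - \phi_N \|_{W^1_p(\Omega)} \lesssim N^{-(2m-2)}.
\]
```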

📝 Abstract
This paper examines the $L_p$ and $W^1_p$ norm approximation errors of ReLU neural networks for Korobov functions. In terms of network width and depth, we derive nearly optimal super-approximation error bounds of order $2m$ in the $L_p$ norm and order $2m-2$ in the $W^1_p$ norm, for target functions with $L_p$ mixed derivative of order $m$ in each direction. The analysis leverages sparse grid finite elements and the bit extraction technique. Our results improve upon classical lowest order $L_\infty$ and $H^1$ norm error bounds and demonstrate that the expressivity of neural networks is largely unaffected by the curse of dimensionality.
Problem

Research questions and friction points this paper is trying to address.

Analyze L_p and W^1_p norm errors for ReLU networks
Derive super-approximation bounds for Korobov functions
Overcome the curse of dimensionality in neural network approximation (see the note below)
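For context, a textbook contrast (standard approximation theory, not a result of this paper) explains what "curse of dimensionality" means here: for generic Sobolev functions, classical piecewise-polynomial methods with $N$ parameters suffer a rate whose exponent shrinks with the dimension $d$, whereas the Korobov mixed-derivative assumption yields exponents free of $d$, up to logarithmic factors:

```latex
% Classical benchmark for generic f in W^{m,p} on (0,1)^d with
% piecewise-polynomial (finite element) approximation:
\[
  \inf_{\phi \in V_N} \| f - \phi \|_{L_p} \sim N^{-m/d},
\]
% so the rate degrades as d grows; under the Korobov assumption the
% exponents -2m and -(2m-2) above do not involve d (log factors and
% dimension-dependent constants aside, an assumption of this note).
```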
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReLU networks achieve super-approximation rates
Leverages sparse grid finite elements (see the sketch after this list)
Uses bit extraction technique for analysis
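To make the sparse-grid ingredient concrete, here is a minimal, self-contained Python sketch, illustrative only and not the paper's construction, that counts degrees of freedom of a full tensor-product grid versus a classical sparse grid of level $n$ in dimension $d$, using the standard hierarchical-basis node counts:

```python
# Illustrative sketch (not the paper's construction): compare degrees of
# freedom of a full tensor-product grid and a classical sparse grid of
# level n in dimension d. Node counts follow the standard hierarchical
# (Bungartz-Griebel) convention with homogeneous boundary values.
from math import comb

def full_grid_dofs(n: int, d: int) -> int:
    """Interior nodes of the full tensor grid with mesh size 2**-n per axis."""
    return (2**n - 1) ** d

def sparse_grid_dofs(n: int, d: int) -> int:
    """Interior nodes of the sparse grid built from hierarchical increments
    W_l with multi-levels l_i >= 1 and |l|_1 <= n + d - 1; the increments
    with level sum s contribute comb(s-1, d-1) blocks of 2**(s-d) nodes."""
    return sum(2 ** (s - d) * comb(s - 1, d - 1) for s in range(d, n + d))

if __name__ == "__main__":
    n = 10
    for d in (2, 4, 8):
        full, sparse = full_grid_dofs(n, d), sparse_grid_dofs(n, d)
        print(f"d={d}: full grid {full:.2e} nodes, sparse grid {sparse:.2e} nodes")
```

Running this for $n = 10$ shows the sparse-grid count growing like $2^n n^{d-1}$ rather than $2^{nd}$, which is the mechanism by which sparse-grid constructions temper the dimensional dependence.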
Yuwen Li
Zhejiang University
numerical analysis · scientific computing
Guozhi Zhang
School of Mathematical Sciences, Zhejiang University, 310058, Hangzhou, China