Sharp Lower Bounds for Linearized ReLU^k Approximation on the Sphere

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper establishes lower bounds on the $\mathcal{L}^2$ approximation error of linearized shallow ReLU$^k$ neural networks on the unit sphere $\mathbb{S}^d$, focusing on the saturation order, that is, the theoretical limit of the best achievable approximation rate, for highly smooth target functions. Employing tools from harmonic analysis and classical approximation theory, the authors work with antipodally quasi-uniform center sets and leverage precise characterizations of function smoothness on the sphere. They rigorously prove, for the first time, that the exact saturation order for such networks on $\mathbb{S}^d$ is $\frac{d+2k+1}{2d}$: with $n$ neurons, the best approximation of a sufficiently smooth target cannot converge faster than $n^{-\frac{d+2k+1}{2d}}$. This lower bound matches existing upper bounds, thereby confirming the optimality of the convergence rate. The result bridges a critical gap between linearized neural-network approximation and classical spherical approximation theory, revealing fundamental limitations on expressive power and delineating sharp performance boundaries.
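
To make the linearized setting concrete, here is a minimal numerical sketch (not from the paper): the centers on the sphere are held fixed and only the outer coefficients are trained, so fitting reduces to linear least squares. All names, the target function, and the random choice of centers are illustrative assumptions; the paper's lower bound additionally requires an antipodally quasi-uniform center set, which random sampling only loosely approximates.

```python
import numpy as np

def random_sphere_points(m, d, rng):
    """Draw m points uniformly on the unit sphere S^d (embedded in R^{d+1})."""
    x = rng.standard_normal((m, d + 1))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def relu_k_features(x, centers, k):
    """Feature map phi_i(x) = max(0, <xi_i, x>)^k with the centers xi_i held fixed."""
    return np.maximum(x @ centers.T, 0.0) ** k

rng = np.random.default_rng(0)
d, k, n = 2, 2, 128          # sphere dimension, ReLU power, number of neurons

# Fixed (here: random) centers; the theorem assumes antipodal quasi-uniformity.
centers = random_sphere_points(n, d, rng)

# A smooth target on S^2 and Monte Carlo training samples.
x_train = random_sphere_points(4000, d, rng)
y_train = np.exp(x_train[:, 0])

# "Linearized" training: f_n(x) = sum_i c_i * max(0, <xi_i, x>)^k is linear
# in the coefficients c, so the L^2 fit is an ordinary least-squares problem.
A = relu_k_features(x_train, centers, k)
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Monte Carlo estimate of the L^2(S^d) approximation error.
x_test = random_sphere_points(20000, d, rng)
resid = relu_k_features(x_test, centers, k) @ coef - np.exp(x_test[:, 0])
print(f"n={n}, estimated L2 error ~ {np.sqrt(np.mean(resid**2)):.3e}")
```

Rerunning the sketch over a range of $n$ and plotting the error against the predicted $n^{-\frac{d+2k+1}{2d}}$ slope is one way to observe the saturation rate empirically.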

📝 Abstract
We prove a saturation theorem for linearized shallow ReLU$^k$ neural networks on the unit sphere $\mathbb{S}^d$. For any antipodally quasi-uniform set of centers, if the target function has smoothness $r > \frac{d+2k+1}{2}$, then the best $\mathcal{L}^2(\mathbb{S}^d)$ approximation cannot converge faster than order $n^{-\frac{d+2k+1}{2d}}$. This lower bound matches existing upper bounds, thereby establishing the exact saturation order $\frac{d+2k+1}{2d}$ for such networks. Our results place linearized neural-network approximation firmly within the classical saturation framework and show that, although ReLU$^k$ networks outperform finite elements under equal degrees $k$, this advantage is intrinsically limited.
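
As a quick sanity check on the exponent (simple arithmetic, not an additional claim from the paper): for $d = 2$ and $k = 1$,

$$
\frac{d+2k+1}{2d} = \frac{2 + 2 + 1}{4} = \frac{5}{4},
$$

so on $\mathbb{S}^2$ no amount of smoothness beyond $r > \frac{5}{2}$ can push the best $\mathcal{L}^2$ rate past $n^{-5/4}$.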
Problem

Research questions and friction points this paper is trying to address.

Establishes sharp lower bounds for ReLU^k network approximation on spheres
Determines exact saturation order for linearized shallow neural networks
Shows that the advantage of ReLU^k networks over finite elements of equal degree is intrinsically limited
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proves a saturation theorem for linearized shallow ReLU^k networks on the sphere
Establishes the exact saturation order (d+2k+1)/(2d), matching existing upper bounds
Quantifies the limit of the ReLU^k advantage over finite elements (see the comparison below)
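
For context on the last point, a back-of-envelope comparison, assuming the classical approximation rate $n^{-\frac{k+1}{d}}$ for degree-$k$ finite elements with $n$ degrees of freedom in dimension $d$ (standard approximation theory, not part of this paper's statement): the gain in the exponent is

$$
\frac{d+2k+1}{2d} - \frac{k+1}{d} = \frac{d-1}{2d},
$$

which is positive for every $d \ge 2$ but never exceeds $\frac{1}{2}$, matching the abstract's point that the ReLU^k advantage, while real, is intrinsically bounded.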