🤖 AI Summary
This paper establishes lower bounds on the $L^2$ approximation error of linearized shallow ReLU$^k$ neural networks on the unit sphere $\mathbb{S}^d$, focusing on the saturation order—the theoretical limit of best approximation rates—for highly smooth target functions. Employing tools from harmonic analysis and classical approximation theory, the authors work with antipodally quasi-uniform center sets and leverage precise characterizations of function smoothness on the sphere. They rigorously prove, for the first time, that the exact saturation order for such networks on $\mathbb{S}^d$ is $\frac{d+2k+1}{2d}$. This lower bound matches existing upper bounds, thereby confirming the optimality of the convergence rate. The result bridges a critical gap between linearized neural network approximation and classical spherical approximation theory, revealing fundamental limitations on expressive power and delineating sharp performance boundaries.
📝 Abstract
We prove a saturation theorem for linearized shallow ReLU$^k$ neural networks on the unit sphere $\mathbb{S}^d$. For any antipodally quasi-uniform set of centers, if the target function has smoothness $r > \frac{d+2k+1}{2}$, then the best $\mathcal{L}^2(\mathbb{S}^d)$ approximation cannot converge faster than order $n^{-\frac{d+2k+1}{2d}}$. This lower bound matches existing upper bounds, thereby establishing the exact saturation order $\frac{d+2k+1}{2d}$ for such networks. Our results place linearized neural-network approximation firmly within the classical saturation framework and show that, although ReLU$^k$ networks outperform finite elements under equal degrees $k$, this advantage is intrinsically limited.
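As a concrete illustration of the stated rate (a worked instance, not an example taken from the paper): for the ordinary ReLU activation ($k=1$) on the two-dimensional sphere ($d=2$), the saturation order is
$$
\frac{d+2k+1}{2d} = \frac{2 + 2\cdot 1 + 1}{2\cdot 2} = \frac{5}{4},
$$
so once the target has smoothness $r > \frac{5}{2}$, no additional smoothness can push the best $\mathcal{L}^2(\mathbb{S}^2)$ approximation rate beyond $n^{-5/4}$ with $n$ antipodally quasi-uniform centers.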