🤖 AI Summary
This work identifies fundamental limitations of shallow neural networks in approximating and learning high-frequency signals under finite machine precision and computational constraints. Methodologically, it employs matrix condition number analysis, gradient flow modeling, frequency-domain error decomposition, and large-scale numerical experiments. The study establishes, for the first time, quantitative lower bounds on numerical approximation error, characterizes computational complexity bottlenecks, and reveals condition-number-driven instability in high-frequency approximation, thereby unifying accuracy, cost, and stability in a single theoretical framework. Theoretical analysis and empirical validation jointly demonstrate that: (i) the approximation error of high-frequency components is dominated by an exponentially deteriorating condition number; (ii) the achievable error lower bound grows exponentially with signal frequency; and (iii) the computational cost grows polynomially, and in some regimes exponentially, with frequency. These results provide theoretical criteria and practical design principles for modeling high-frequency signals with neural networks.
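The conditioning claim in (i) and (ii) is easy to probe numerically. Below is a minimal sketch, not the paper's actual experiments: it builds a fixed two-layer ReLU feature matrix with random first-layer weights, solves for the outer coefficients by least squares against targets `sin(2*pi*k*x)`, and prints the condition number of the feature matrix together with the residual as the frequency `k` grows. The width, sample count, weight scale, and frequency range are all illustrative assumptions.

```python
# Sketch: error floor of a fixed shallow ReLU basis vs. target frequency.
# Sizes and distributions are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, m = 2000, 400                         # samples, hidden neurons (assumed)
x = np.linspace(0.0, 1.0, n)

# Random first layer: phi_j(x) = ReLU(w_j * x + b_j). Only the outer
# coefficients are solved for, so the conditioning of A drives the result.
w = rng.normal(size=m) * 10.0
b = rng.uniform(-10.0, 10.0, size=m)
A = np.maximum(w[None, :] * x[:, None] + b[None, :], 0.0)

print(f"cond(A) = {np.linalg.cond(A):.3e}")
for k in [1, 2, 4, 8, 16, 32, 64]:
    y = np.sin(2 * np.pi * k * x)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.linalg.norm(A @ c - y) / np.linalg.norm(y)
    print(f"k = {k:3d}   relative L2 error = {err:.3e}")
```

In this toy setup the residual of the best least-squares fit degrades rapidly with `k` for a fixed basis, consistent in spirit with the summary's claim that accuracy for high frequencies is limited by conditioning rather than by optimization alone.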
📝 Abstract
In this work, a comprehensive numerical study combining analysis and experiments shows why a two-layer neural network has difficulty approximating and learning high-frequency functions when machine precision and computation cost matter in practice. In particular, the following basic computational issues are investigated: (1) the minimal numerical error achievable under a finite machine precision, (2) the computational cost of achieving a given accuracy, and (3) stability with respect to perturbations. The key to the study is the conditioning of the representation and of its learning dynamics. Explicit answers to these questions, with numerical verification, are presented.
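The "learning dynamics" side can be sketched in the same linear setting. When only the outer coefficients are trained, gradient descent on the squared loss is a linear iteration governed by the Gram matrix `K = A A^T / n`: the residual component along an eigenvector of `K` decays at a rate set by its eigenvalue. The hypothetical snippet below (same assumed random ReLU features as above, not the paper's setup) measures an energy-weighted eigenvalue for sinusoidal targets of increasing frequency as a proxy for how fast gradient flow can fit them.

```python
# Sketch: effective gradient-flow rate for sin(2*pi*k*x) under a fixed
# shallow ReLU basis. Setup and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 300
x = np.linspace(0.0, 1.0, n)
w = rng.normal(size=m) * 10.0
b = rng.uniform(-10.0, 10.0, size=m)
A = np.maximum(w[None, :] * x[:, None] + b[None, :], 0.0)

# Linear training dynamics: d(residual)/dt = -K residual, K PSD.
K = (A @ A.T) / n
lam, V = np.linalg.eigh(K)               # eigenpairs of the Gram matrix

for k in [1, 4, 16, 64]:
    y = np.sin(2 * np.pi * k * x)
    p = (V.T @ y) ** 2                   # target energy per eigen-direction
    eff = (p @ lam) / p.sum()            # energy-weighted eigenvalue ~ rate
    print(f"k = {k:3d}   effective eigenvalue = {eff:.3e}")
```

Higher-frequency targets concentrate their energy on the small eigenvalues of `K`, so the effective rate collapses with `k`: the same conditioning that limits the achievable accuracy also slows the learning dynamics.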