🤖 AI Summary
This work systematically analyzes three fundamental error sources in the linear layers of Fourier Neural Operators (FNOs): statistical error arising from finite samples, rank-approximation error due to spectral truncation, and discretization error induced by finite grid resolution. It establishes, for the first time, a unified theoretical framework that rigorously models both the individual origin of each error and the mechanisms by which they couple. A discrete Fourier transform (DFT)-based least-squares estimator is constructed, and a generalization error analysis yielding matching upper and lower bounds is developed. The analysis delivers explicit, quantitative bounds on all three error components, characterizing precisely how sample size, spatial grid resolution, and truncation order jointly govern generalization performance. The result is an operator-learning theory for FNOs that incorporates statistical and numerical perspectives simultaneously, filling a gap in the error decomposition and controllability analysis of neural operators.
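To make the estimator concrete, below is a minimal sketch of a DFT-based, per-mode least-squares fit for a Fourier-multiplier (convolutional) linear layer in one dimension. The scalar setting and the names `fit_fourier_multiplier` and `apply_multiplier` are illustrative assumptions, not the paper's exact construction; the point is that, for a multiplier operator, the least-squares problem decouples across retained Fourier modes.

```python
import numpy as np

def fit_fourier_multiplier(U, V, k_max):
    """Per-mode least-squares estimate of a Fourier multiplier.

    U, V  : arrays of shape (N, n) holding N input/output function
            pairs sampled on a uniform grid of n points (discretization).
    k_max : number of retained Fourier modes (rank truncation).
    """
    U_hat = np.fft.rfft(U, axis=1)[:, :k_max]  # DFT of the inputs
    V_hat = np.fft.rfft(V, axis=1)[:, :k_max]  # DFT of the outputs
    # The least-squares problem decouples across modes; each mode k has
    # the closed-form solution r(k) = <v_hat(k), u_hat(k)> / ||u_hat(k)||^2.
    num = np.sum(V_hat * np.conj(U_hat), axis=0)
    den = np.sum(np.abs(U_hat) ** 2, axis=0)
    return num / den

def apply_multiplier(r_hat, u):
    """Apply the truncated estimated operator to a new input sample u."""
    n = u.shape[-1]
    u_hat = np.fft.rfft(u)
    v_hat = np.zeros_like(u_hat)          # modes beyond k_max stay zero
    v_hat[: r_hat.size] = r_hat * u_hat[: r_hat.size]
    return np.fft.irfft(v_hat, n=n)
```

In this sketch the three error sources map directly onto the three parameters: the number of samples N controls the statistical error, the truncation level `k_max` the rank-approximation error, and the grid size n the discretization error.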
📝 Abstract
We study the learning-theoretic foundations of operator learning, using the linear layer of the Fourier Neural Operator architecture as a model problem. We first identify three main errors that occur during the learning process: statistical error due to finite sample size, truncation error from finite-rank approximation of the operator, and discretization error from handling functional data on a finite grid of domain points. We then analyze a Discrete Fourier Transform (DFT)-based least-squares estimator, establishing both upper and lower bounds on each of these errors.
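As a usage sketch of the estimator above, the following hypothetical experiment exposes the three knobs the analysis studies: sample size N (statistical error), grid size n (discretization error), and truncation level `k_max` (rank/truncation error). The decaying ground-truth multiplier and the noise level are invented for illustration, not taken from the paper.

```python
rng = np.random.default_rng(0)

def make_data(N, n, noise=0.01):
    # Hypothetical ground truth: a smoothly decaying multiplier r(k).
    k = np.arange(n // 2 + 1)
    r_true = 1.0 / (1.0 + k ** 2)
    U = rng.standard_normal((N, n))
    V = np.fft.irfft(r_true * np.fft.rfft(U, axis=1), n=n, axis=1)
    return U, V + noise * rng.standard_normal((N, n)), r_true

# Statistical error shrinks as N grows, truncation error as k_max grows,
# and discretization error as n grows (all else held fixed).
U, V, r_true = make_data(N=200, n=256)
r_hat = fit_fourier_multiplier(U, V, k_max=32)
print("max multiplier error on retained modes:",
      np.abs(r_hat - r_true[:32]).max())
```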