Discretization Error of Fourier Neural Operators

📅 2024-05-03
🏛️ arXiv.org
📈 Citations: 12
Influential: 0
🤖 AI Summary
This work addresses the aliasing errors introduced by the discrete implementation of Fourier Neural Operators (FNOs), which remain theoretically unquantified despite their practical significance. Specifically, the discrepancy between the continuous FNO formulation and its grid-based discretization has not been systematically characterized, and isolating this discretization error from other sources, such as approximation and optimization errors, remains challenging. Method: Leveraging tools from Fourier analysis, numerical functional analysis, and FFT theory, the authors derive an explicit algebraic convergence rate for the discretization error with respect to grid resolution and establish its quantitative dependence on the Sobolev regularity of the input function. Contribution/Results: The paper provides a verifiable upper bound on this error, revealing intrinsic trade-offs among resolution, input smoothness, and model stability. Numerical experiments confirm the theoretical prediction: inputs with higher Sobolev regularity yield faster algebraic error decay under mesh refinement, furnishing a rigorous foundation for reliable discrete FNO design.
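The algebraic decay described in the summary can be probed with a small numerical experiment. The sketch below is an illustrative stand-in, not the paper's setup: the helper `coarse_mode_error` and the triangle-wave input (chosen because its Fourier coefficients decay only algebraically, i.e. it has limited Sobolev regularity) are assumptions. It compares low-frequency Fourier coefficients computed on coarse grids against a fine-grid reference, which is where aliasing error appears.

```python
import numpy as np

def coarse_mode_error(f, n, n_ref=2048, k_max=4):
    """Aliasing error of the first k_max Fourier coefficients of f computed
    on an n-point uniform grid, relative to a fine n_ref-point reference.
    Illustrative experiment only, not the paper's exact protocol."""
    x = np.arange(n) / n
    x_ref = np.arange(n_ref) / n_ref
    c = np.fft.rfft(f(x)) / n              # coefficients from the coarse grid
    c_ref = np.fft.rfft(f(x_ref)) / n_ref  # near-exact reference coefficients
    return np.max(np.abs(c[:k_max] - c_ref[:k_max]))

# A periodic input with a kink: Fourier coefficients decay like 1/k^2,
# so the aliasing error should shrink algebraically under mesh refinement.
tri = lambda x: np.abs(x - 0.5)

errors = [coarse_mode_error(tri, n) for n in (16, 32, 64, 128)]
# Empirical convergence rates between successive grid doublings.
rates = np.log2(np.array(errors[:-1]) / np.array(errors[1:]))
```

For a smoother input (e.g. a band-limited trigonometric polynomial) the measured error would instead drop to machine precision once the grid resolves all modes, consistent with the regularity dependence the paper establishes.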

📝 Abstract
Operator learning is a variant of machine learning that is designed to approximate maps between function spaces from data. The Fourier Neural Operator (FNO) is a common model architecture used for operator learning. The FNO combines pointwise linear and nonlinear operations in physical space with pointwise linear operations in Fourier space, leading to a parameterized map acting between function spaces. Although FNOs formally involve convolutions of functions on a continuum, in practice the computations are performed on a discretized grid, allowing efficient implementation via the FFT. In this paper, the aliasing error that results from such a discretization is quantified and algebraic rates of convergence in terms of the grid resolution are obtained as a function of the regularity of the input. Numerical experiments that validate the theory and describe model stability are performed.
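The architecture the abstract describes, a pointwise linear path in physical space combined with a pointwise linear action on Fourier coefficients computed via the FFT, can be sketched in a minimal 1-D form. This is an illustrative reduction, not the paper's implementation: the names `spectral_conv_1d` and `fno_layer`, the mode cutoff `k_max`, and the ReLU standing in for the usual activation are all assumptions.

```python
import numpy as np

def spectral_conv_1d(v, weights, k_max):
    """FNO-style spectral convolution on a uniform 1-D periodic grid.

    v       : (n,) real samples of the input function on [0, 1).
    weights : (k_max,) complex multipliers for the lowest Fourier modes.
    Modes at or above k_max are truncated to zero.
    """
    v_hat = np.fft.rfft(v)                        # discrete Fourier coefficients
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = weights * v_hat[:k_max]     # pointwise linear action in Fourier space
    return np.fft.irfft(out_hat, n=len(v))        # back to the physical grid

def fno_layer(v, weights, k_max, W=1.0, b=0.0):
    """One FNO layer: spectral path plus pointwise affine path, then a
    nonlinearity (a plain ReLU here, purely for illustration)."""
    return np.maximum(W * v + b + spectral_conv_1d(v, weights, k_max), 0.0)
```

Because the nonlinearity generates frequencies beyond those the n-point grid can represent, repeated layers alias high modes onto low ones; this grid-dependent aliasing is exactly the discretization error the paper quantifies.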
Problem

Research questions and friction points this paper is trying to address.

Quantifying the discretization error between continuous FNOs and their grid-based implementations
Establishing algebraic convergence rates in terms of grid resolution and input regularity
Developing an error decomposition analysis that can inform and optimize training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines pointwise operations in physical space with pointwise linear operations in Fourier space
Quantifies the discretization error of the implemented model relative to its continuum definition
Leverages error decomposition to reduce training time
S. Lanthaler
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA
Andrew M. Stuart
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA
Margaret Trautner
California Institute of Technology
Dynamical systems, machine learning, multiscale modeling