Analysis of Fourier Neural Operators via Effective Field Theory

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fourier Neural Operators (FNOs) lack rigorous theoretical foundations regarding stability, generalization, and frequency-domain behavior in high-dimensional PDE surrogate modeling. Method: This work pioneers the application of Effective Field Theory (EFT) to neural operator analysis, establishing a systematic theoretical framework in infinite-dimensional function spaces. It derives closed-form recurrence relations for layer-wise kernels and four-point vertices, uncovering how nonlinear activations induce high-frequency mode coupling and spectral shifting. Integrating EFT with spectral truncation analysis, it formulates a criticality-driven hyperparameter selection principle and derives critical initialization conditions for weights in wide networks to ensure stable perturbation scaling across depth. Results: Theoretical analysis and empirical validation jointly demonstrate that this principle significantly enhances generalization and feature learning capability. It establishes a novel, interpretable design paradigm for FNOs grounded in physical and statistical principles.
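The spectral layer whose truncation behavior the analysis studies can be sketched in a few lines of NumPy. The single-channel 1-D form below is an illustrative simplification under our own assumptions (`spectral_conv_1d`, the cutoff `k_max`, and the weight shape are not taken from the paper):

```python
import numpy as np

def spectral_conv_1d(v, weights, k_max):
    """One FNO-style spectral convolution: FFT, truncate to the lowest
    k_max modes, multiply by learned complex weights, inverse FFT.
    Single channel, 1-D, for illustration only."""
    v_hat = np.fft.rfft(v)                      # go to frequency space
    out_hat = np.zeros_like(v_hat)
    out_hat[:k_max] = weights * v_hat[:k_max]   # spectral truncation
    return np.fft.irfft(out_hat, n=len(v))

rng = np.random.default_rng(0)
n, k_max = 64, 8
v = rng.standard_normal(n)
w = rng.standard_normal(k_max) + 1j * rng.standard_normal(k_max)
u = spectral_conv_1d(v, w, k_max)
# The linear spectral layer is band-limited: modes >= k_max vanish, so any
# energy above k_max after an activation must be created by the nonlinearity.
print(np.abs(np.fft.rfft(u))[k_max:].max())
```

A full FNO layer adds a pointwise linear term and a nonlinear activation after this spectral convolution; the paper's EFT recursions track how those pieces interact across depth.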

📝 Abstract
Fourier Neural Operators (FNOs) have emerged as leading surrogates for high-dimensional partial differential equations, yet their stability, generalization, and frequency behavior lack a principled explanation. We present the first systematic effective-field-theory analysis of FNOs in an infinite-dimensional function space, deriving closed recursion relations for the layer kernel and four-point vertex and then examining three practically important settings: analytic activations, scale-invariant cases, and architectures with residual connections. The theory shows that nonlinear activations inevitably couple input frequencies to high-frequency modes that are otherwise discarded by spectral truncation, and experiments confirm this frequency transfer. For wide networks we obtain explicit criticality conditions on the weight-initialization ensemble that keep small input perturbations at a uniform scale across depth, and empirical tests validate these predictions. Taken together, our results quantify how nonlinearity enables neural operators to capture non-trivial features, supply criteria for hyperparameter selection via criticality analysis, and explain why scale-invariant activations and residual connections enhance feature learning in FNOs.
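The frequency-transfer effect described in the abstract is easy to reproduce numerically: pass a band-limited signal through a smooth nonlinearity and compare spectral energy above a cutoff before and after. The GeLU approximation and the cutoff `k_max` below are our illustrative choices, not the paper's experimental setup:

```python
import numpy as np

# Band-limited input: spectrum confined to |k| <= 5, below the cutoff k_max.
n, k_max = 256, 8
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
v = np.sin(3 * x) + 0.5 * np.cos(5 * x)

def gelu(z):
    """Tanh approximation of GeLU, a smooth (analytic) activation."""
    return 0.5 * z * (1 + np.tanh(np.sqrt(2 / np.pi) * (z + 0.044715 * z**3)))

before = np.abs(np.fft.rfft(v))
after = np.abs(np.fft.rfft(gelu(v)))

# Spectral energy above the truncation cutoff, before vs. after the activation.
high_before = np.sum(before[k_max:] ** 2)   # ~0: input is band-limited
high_after = np.sum(after[k_max:] ** 2)     # > 0: nonlinearity creates harmonics
print(high_before, high_after)
```

The nonlinearity generates harmonics and cross terms (e.g. modes 8, 9, 10 from products of modes 3 and 5) that land above the cutoff and would be discarded by spectral truncation in the next layer.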
Problem

Research questions and friction points this paper is trying to address.

Analyzes stability and frequency behavior of Fourier Neural Operators
Derives criticality conditions for weight initialization in wide networks
Explains enhancement of feature learning via specific activations and connections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Effective field theory analyzes FNOs in infinite dimensions
Nonlinear activations couple input frequencies to high-frequency modes
Criticality conditions ensure uniform perturbation scale
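The last point can be illustrated with the classic wide-network criticality experiment, shown here for a plain ReLU MLP rather than an FNO (the paper's conditions concern the FNO weight ensemble; this sketch only demonstrates the generic sub-critical / critical / super-critical behavior, with He-style variance 2/width as the critical point for ReLU):

```python
import numpy as np

rng = np.random.default_rng(1)
width, depth = 512, 30

def perturbation_growth(sigma2):
    """Propagate two nearby inputs through a deep ReLU network with i.i.d.
    Gaussian weights of variance sigma2 / width, and return the factor by
    which their separation grows from input to output."""
    x = rng.standard_normal(width)
    y = x + 1e-3 * rng.standard_normal(width)
    d0 = np.linalg.norm(x - y)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(sigma2 / width)
        x, y = np.maximum(W @ x, 0), np.maximum(W @ y, 0)
    return np.linalg.norm(x - y) / d0

# Sub-critical (decays), critical (roughly preserved), super-critical (explodes).
for s2 in (1.0, 2.0, 4.0):
    print(s2, perturbation_growth(s2))
```

Only at the critical variance does the perturbation stay at a uniform scale across depth, which is the property the paper's initialization conditions are designed to guarantee for FNO weight ensembles.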