Derivative-Informed Fourier Neural Operator: Universal Approximation and Applications to PDE-Constrained Optimization

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional Fourier Neural Operators (FNOs) suffer from insufficient derivative accuracy in PDE-constrained optimization, limiting their reliability for gradient-based methods. Method: We propose the Derivative-Informed Fourier Neural Operator (DIFNO), introducing derivative-aware supervised learning and a derivative-guided multi-resolution training paradigm, integrated with dimensionality reduction for computational efficiency. Contribution/Results: DIFNO establishes the first joint universal approximation theory for both operators and their Fréchet derivatives within FNO frameworks, with theoretical guarantees in weighted Sobolev spaces—even for unbounded-support inputs. Experiments on infinite-dimensional inverse problems governed by nonlinear diffusion-reaction, Helmholtz, and Navier–Stokes equations demonstrate that DIFNO significantly reduces sample complexity: it achieves high-fidelity response and sensitivity predictions using only minimal training data. This markedly enhances the gradient fidelity and generalization capability of surrogate models in optimization and uncertainty quantification tasks.

📝 Abstract
We present approximation theories and efficient training methods for derivative-informed Fourier neural operators (DIFNOs) with applications to PDE-constrained optimization. A DIFNO is an FNO trained by minimizing its prediction error jointly on output and Fréchet derivative samples of a high-fidelity operator (e.g., a parametric PDE solution operator). As a result, a DIFNO can closely emulate not only the high-fidelity operator's response but also its sensitivities. To motivate the use of DIFNOs instead of conventional FNOs as surrogate models, we show that accurate surrogate-driven PDE-constrained optimization requires accurate surrogate Fréchet derivatives. Then, for continuously differentiable operators, we establish (i) simultaneous universal approximation of FNOs and their Fréchet derivatives on compact sets, and (ii) universal approximation of FNOs in weighted Sobolev spaces with input measures that have unbounded supports. Our theoretical results certify the capability of FNOs for accurate derivative-informed operator learning and accurate solution of PDE-constrained optimization. Furthermore, we develop efficient training schemes using dimension reduction and multi-resolution techniques that significantly reduce memory and computational costs for Fréchet derivative learning. Numerical examples on nonlinear diffusion–reaction, Helmholtz, and Navier–Stokes equations demonstrate that DIFNOs are superior in sample complexity for operator learning and solving infinite-dimensional PDE-constrained inverse problems, achieving high accuracy at low training sample sizes.
Problem

Research questions and friction points this paper is trying to address.

Develops derivative-informed Fourier neural operators for PDE-constrained optimization.
Establishes universal approximation theories for operator and derivative learning.
Proposes efficient training methods to reduce computational costs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

DIFNOs train on output and derivative samples for accuracy.
Efficient training uses dimension reduction and multi-resolution techniques.
DIFNOs achieve high accuracy with low sample complexity.
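The joint output-plus-derivative training objective described in the summary and abstract can be illustrated with a toy example. The sketch below is a minimal NumPy illustration, not the paper's method: the nonlinear operator `F`, the linear surrogate `W`, the sample sizes, and the weight `lam` are all illustrative assumptions. A linear surrogate's Fréchet derivative at every point is the map itself, so derivative (Jacobian-vector product) samples constrain it directly.

```python
import numpy as np

# Hypothetical sketch of derivative-informed training: fit a linear
# surrogate W to a smooth nonlinear operator F by jointly penalizing
# output error and Frechet derivative (Jacobian-vector product) error.
rng = np.random.default_rng(0)
d_in, d_out = 5, 3

A = rng.standard_normal((d_out, d_in))
def F(m):                       # "high-fidelity" operator: F(m) = A tanh(m)
    return A @ np.tanh(m)
def dF(m, v):                   # its Frechet derivative acting on direction v
    return A @ ((1.0 - np.tanh(m) ** 2) * v)

# Training data: output samples and derivative (sensitivity) samples
M = rng.standard_normal((20, d_in))
Y = np.stack([F(m) for m in M])
V = rng.standard_normal((20, d_in))              # random test directions
DY = np.stack([dF(m, v) for m, v in zip(M, V)])

lam, lr = 1.0, 0.05                              # derivative weight, step size
W = np.zeros((d_out, d_in))
for _ in range(500):
    r_out = M @ W.T - Y                          # output residuals
    r_der = V @ W.T - DY                         # derivative residuals
    # gradient of mean squared output loss + lam * mean squared derivative loss
    grad = 2.0 * (r_out.T @ M + lam * r_der.T @ V) / len(M)
    W -= lr * grad

loss_out = float(np.mean((M @ W.T - Y) ** 2))
print(f"output MSE after training: {loss_out:.4f}")
```

In the paper's setting, the linear map would be replaced by an FNO and the derivative samples by reduced (dimension-reduced) actions of the high-fidelity operator's Fréchet derivative; the structure of the joint loss is what the sketch is meant to convey.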