Method of Manufactured Learning for Solver-free Training of Neural Operators

📅 2025-11-16
🤖 AI Summary
Neural operator training often relies on expensive physical experiments or high-cost numerical solvers to generate labeled data, limiting scalability and exploration of physical systems. To address this, we propose Manufactured Learning (MML), the first method to integrate the Method of Manufactured Solutions (MMS) into neural operator training. MML analytically constructs smooth, physically consistent solutions satisfying the governing PDEs and derives corresponding source terms via exact differentiation. During training, neural operators learn from these analytic solution–source term pairs; during inference, the source term is set to zero to recover the original homogeneous or driven equation. This eliminates dependence on numerical solvers, enabling solver-agnostic, architecture-independent, physics-informed synthetic data generation. Evaluated on heat conduction, convection, Burgers’, and diffusion-reaction equations, MML enables Fourier neural operators and other architectures to achieve high spectral accuracy, low PDE residual, and strong generalization across domains and parameters.
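The core data-generation step described above can be sketched in a few lines. The snippet below is a hypothetical illustration (not the authors' code) for the 1D heat equation u_t = α·u_xx + f: a smooth manufactured solution u* is chosen, and the source term f* is derived by exact symbolic differentiation, yielding an analytic solution–source pair on a grid. The specific solution sin(πx)·e^(−t) and α = 0.1 are assumptions for the example.

```python
# Hypothetical MML-style data generation for u_t = alpha * u_xx + f.
# Pick a smooth manufactured solution u*, then derive the source term
# f* = u*_t - alpha * u*_xx by exact differentiation (no solver needed).
import numpy as np
import sympy as sp

x, t = sp.symbols("x t")
alpha = sp.Rational(1, 10)  # assumed diffusivity for this example

# Manufactured solution: smooth and satisfies u(0,t) = u(1,t) = 0
u_star = sp.sin(sp.pi * x) * sp.exp(-t)

# Exact source term obtained by applying the governing operator
f_star = sp.diff(u_star, t) - alpha * sp.diff(u_star, x, 2)

# Compile symbolic expressions for fast grid evaluation
u_fn = sp.lambdify((x, t), u_star, "numpy")
f_fn = sp.lambdify((x, t), f_star, "numpy")

xs = np.linspace(0.0, 1.0, 64)
ts = np.linspace(0.0, 1.0, 32)
X, T = np.meshgrid(xs, ts, indexing="ij")

u_grid = u_fn(X, T)  # training target (solution field)
f_grid = f_fn(X, T)  # training input (source/forcing field)
```

At inference, the trained operator is queried with f = 0, recovering the original homogeneous heat equation.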

📝 Abstract
Training neural operators to approximate mappings between infinite-dimensional function spaces often requires extensive datasets generated by either demanding experimental setups or computationally expensive numerical solvers. This dependence on solver-based data limits scalability and constrains exploration across physical systems. Here we introduce the Method of Manufactured Learning (MML), a solver-independent framework for training neural operators using analytically constructed, physics-consistent datasets. Inspired by the classical method of manufactured solutions, MML replaces numerical data generation with functional synthesis: smooth candidate solutions are sampled from controlled analytical spaces, and the corresponding forcing fields are derived by direct application of the governing differential operators. During inference, setting these forcing terms to zero restores the original governing equations, allowing the trained neural operator to emulate the true solution operator of the system. The framework is agnostic to network architecture and can be integrated with any operator learning paradigm. In this paper, we employ the Fourier neural operator as a representative example. Across canonical benchmarks including the heat, advection, Burgers, and diffusion-reaction equations, MML achieves high spectral accuracy, low residual errors, and strong generalization to unseen conditions. By reframing data generation as a process of analytical synthesis, MML offers a scalable, solver-agnostic pathway toward constructing physically grounded neural operators that retain fidelity to governing laws without reliance on expensive numerical simulations or costly experimental data for training.
Problem

Research questions and friction points this paper is trying to address.

Eliminates dependency on numerical solvers for neural operator training data
Enables physics-consistent operator learning through analytical function synthesis
Provides scalable framework for approximating differential equation solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses analytical synthesis for physics-consistent datasets
Derives forcing fields via differential operators directly
Enables solver-free training of neural operators
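The train/infer convention implied by these points can be made concrete with a small sketch. This is an illustrative assumption, not the paper's implementation: training pairs (f, u) are generated in closed form for the heat equation u_t = α·u_xx + f using a manufactured family u = sin(kπx)·e^(−t), and the inference input simply sets the forcing field to zero to recover the unforced equation. The wavenumber sampling and α = 0.1 are hypothetical choices.

```python
# Hypothetical MML data convention: an operator maps a forcing field f
# to the solution u. Training uses analytic (f, u) pairs; at inference
# f is set to zero so the operator emulates the original equation.
import numpy as np

rng = np.random.default_rng(0)

def manufactured_pair(k, nx=64, nt=32, alpha=0.1):
    """Analytic (f, u) pair for u_t = alpha * u_xx + f with
    u(x, t) = sin(k*pi*x) * exp(-t); f follows by exact differentiation:
    f = u_t - alpha * u_xx = (alpha * (k*pi)**2 - 1) * u."""
    x = np.linspace(0.0, 1.0, nx)[:, None]
    t = np.linspace(0.0, 1.0, nt)[None, :]
    u = np.sin(k * np.pi * x) * np.exp(-t)
    f = (alpha * (k * np.pi) ** 2 - 1.0) * u
    return f, u

# Training set: sampled wavenumbers give a family of analytic pairs
train = [manufactured_pair(k) for k in rng.integers(1, 5, size=8)]

# Inference input for the original (unforced) equation: f = 0
f_infer = np.zeros_like(train[0][0])
```

Because both fields come from the same closed-form expression, the pairs are exactly consistent with the governing PDE by construction, which is the property that makes the synthetic data physics-consistent.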
Arth Sojitra
Department of Mechanical and Aerospace Engineering, University of Tennessee, Knoxville, TN, USA.
Omer San
Associate Professor, Mechanical and Aerospace Engineering, University of Tennessee
Fluid Dynamics, Numerical Methods, Data Assimilation, Machine Learning, Digital Twin