🤖 AI Summary
Physics-informed neural networks (PINNs) and operator learning methods face common challenges in solving parametric partial differential equations (PDEs), including loss balancing, robustness to noisy or sparse data, and reliable uncertainty quantification.
Method: We propose a Bayesian framework that unifies physics constraints with deep operator learning. It employs evolutionary multi-objective optimization to adaptively balance physics-informed and operator learning losses; incorporates replica-exchange stochastic gradient Langevin dynamics for efficient posterior sampling; and integrates Fourier neural operators and deep operator networks to enhance generalization.
Contribution/Results: Experiments on the one-dimensional Burgers equation and the time-fractional mixed diffusion-wave equation demonstrate that our method achieves high-accuracy forward and inverse solutions from only a few noisy observations, outperforming state-of-the-art approaches. It is robust to noise and delivers well-calibrated, reliable uncertainty estimates.
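The replica-exchange sampler mentioned in the method can be illustrated with a minimal toy sketch (not the paper's implementation: the quartic loss, step size, and temperatures below are assumptions for demonstration). Two Langevin chains run at different temperatures and occasionally swap states via a Metropolis-style test, so the low-temperature chain can escape poor local minima:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy non-convex loss standing in for the combined physics + operator loss
    return float(np.sum((theta**2 - 1.0) ** 2))

def grad(theta):
    return 4.0 * theta * (theta**2 - 1.0)

def resgld(theta_lo, theta_hi, n_steps=2000, lr=1e-3, tau_lo=0.01, tau_hi=1.0):
    """Replica-exchange SGLD sketch: a low-temperature chain exploits while a
    high-temperature chain explores; chains swap states with a probability
    based on the loss (energy) difference."""
    samples = []
    for _ in range(n_steps):
        for theta, tau in ((theta_lo, tau_lo), (theta_hi, tau_hi)):
            # Langevin step: gradient descent plus temperature-scaled noise
            theta -= lr * grad(theta)
            theta += np.sqrt(2.0 * lr * tau) * rng.standard_normal(theta.shape)
        # Swap attempt: accept with prob min(1, exp((1/tau_lo - 1/tau_hi)(U_lo - U_hi)))
        log_alpha = (1.0 / tau_lo - 1.0 / tau_hi) * (loss(theta_lo) - loss(theta_hi))
        if np.log(rng.uniform()) < log_alpha:
            theta_lo, theta_hi = theta_hi.copy(), theta_lo.copy()
        samples.append(theta_lo.copy())
    return np.array(samples)

samples = resgld(np.array([2.0]), np.array([-2.0]))
```

The low-temperature chain supplies the posterior samples used for uncertainty estimates, while the high-temperature chain provides the global exploration.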
📝 Abstract
In this paper, we propose an evolutionary Multi-objective Optimization for Replica-Exchange-based Physics-informed Operator learning Network, a novel operator learning network for efficiently solving parametric partial differential equations. In both forward and inverse settings, the network requires only a minimal amount of noisy observational data. While physics-informed neural networks and operator learning approaches such as Deep Operator Networks and Fourier Neural Operators offer promising alternatives to traditional numerical solvers, they struggle to balance operator and physics losses, to remain robust under noisy or sparse data, and to provide uncertainty quantification. The proposed framework addresses these limitations by integrating: (i) evolutionary multi-objective optimization to adaptively balance operator and physics-based losses along the Pareto front; (ii) replica-exchange stochastic gradient Langevin dynamics to improve global parameter-space exploration and accelerate convergence; and (iii) built-in Bayesian uncertainty quantification from stochastic sampling. The proposed operator learning method is tested numerically on several problems, including the one-dimensional Burgers equation and the time-fractional mixed diffusion-wave equation. The results indicate that our framework consistently outperforms standard operator learning methods in accuracy, noise robustness, and the ability to quantify uncertainty.
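The Pareto-based loss balancing in (i) can be sketched with a generic evolutionary multi-objective loop (a non-dominated-selection toy, not the paper's algorithm; the two quadratic objectives below are assumed stand-ins for the physics loss and the operator loss):

```python
import numpy as np

rng = np.random.default_rng(1)

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in one."""
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(scores):
    """Indices of non-dominated score vectors (the current Pareto front)."""
    return [i for i, p in enumerate(scores)
            if not any(dominates(q, p) for j, q in enumerate(scores) if j != i)]

def objectives(x):
    # Toy conflicting objectives standing in for (physics loss, operator loss)
    return np.array([x[0]**2 + x[1]**2, (x[0] - 1.0)**2 + (x[1] - 1.0)**2])

def evolve(pop_size=40, n_gen=30, sigma=0.2):
    """Evolve a population of candidate weightings by Gaussian mutation,
    keeping the Pareto front first and filling the rest by objective sum."""
    pop = rng.uniform(-1.0, 2.0, size=(pop_size, 2))
    for _ in range(n_gen):
        children = pop + sigma * rng.standard_normal(pop.shape)
        merged = np.vstack([pop, children])
        scores = np.array([objectives(x) for x in merged])
        front = pareto_front(scores)
        rest = sorted(set(range(len(merged))) - set(front),
                      key=lambda i: scores[i].sum())
        pop = merged[(front + rest)[:pop_size]]
    return pop

front_weights = evolve()
```

In the framework itself, the competing objectives would be the operator and physics losses of candidate trainings; the sketch only shows the non-dominated selection mechanism that maintains a Pareto front rather than a single fixed weighting.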