An optimal Petrov-Galerkin framework for operator networks

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the optimal approximation of multidimensional partial differential equations (PDEs) in finite-dimensional spaces. It proposes PG-VarMiON, a novel operator network that embeds the optimal Petrov-Galerkin weak formulation directly into a deep operator architecture, implicitly learning weighting functions adapted to the target norm and thereby circumventing the intractability of constructing optimal weighting functions explicitly in higher dimensions. The method combines variational-principle-driven network design, supervised learning, and the weak form of the PDE, and establishes a theoretical error bound that accounts for the error in learning the weighting functions. Experiments demonstrate that PG-VarMiON outperforms popular operator networks on advection-diffusion equations, particularly under limited training data, achieving superior accuracy, generalization, and numerical robustness.

📝 Abstract
The optimal Petrov-Galerkin formulation to solve partial differential equations (PDEs) recovers the best approximation in a specified finite-dimensional (trial) space with respect to a suitable norm. However, the recovery of this optimal solution is contingent on being able to construct the optimal weighting functions associated with the trial basis. While explicit constructions are available for simple one- and two-dimensional problems, such constructions for a general multidimensional problem remain elusive. In the present work, we revisit the optimal Petrov-Galerkin formulation through the lens of deep learning. We propose an operator network framework called Petrov-Galerkin Variationally Mimetic Operator Network (PG-VarMiON), which emulates the optimal Petrov-Galerkin weak form of the underlying PDE. The PG-VarMiON is trained in a supervised manner using a labeled dataset comprising the PDE data and the corresponding PDE solution, with the training loss depending on the choice of the optimal norm. The special architecture of the PG-VarMiON allows it to implicitly learn the optimal weighting functions, thus endowing the proposed operator network with the ability to generalize well beyond the training set. We derive approximation error estimates for PG-VarMiON, highlighting the contributions of various error sources, particularly the error in learning the true weighting functions. Several numerical results are presented for the advection-diffusion equation to demonstrate the efficacy of the proposed method. By embedding the Petrov-Galerkin structure into the network architecture, PG-VarMiON exhibits greater robustness and improved generalization compared to other popular deep operator frameworks, particularly when the training data is limited.
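For readers unfamiliar with the setting, the optimal Petrov-Galerkin formulation the abstract refers to can be stated in standard notation (a generic sketch, not taken from the paper):

$$\text{Find } u_h \in U_h = \operatorname{span}\{\phi_1,\dots,\phi_n\} \ \text{ such that } \quad a(u_h, w_i) = (f, w_i), \quad i = 1,\dots,n.$$

The weighting (test) functions $\{w_i\}$ are called optimal with respect to a norm $\|\cdot\|_*$ when the resulting Petrov-Galerkin solution is the best approximation of the true solution $u$ in that norm:

$$\|u - u_h\|_* = \min_{v_h \in U_h} \|u - v_h\|_*.$$

Constructing such $w_i$ explicitly is what becomes elusive for general multidimensional problems.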
Problem

Research questions and friction points this paper is trying to address.

Develops PG-VarMiON to solve PDEs via the optimal Petrov-Galerkin formulation.
Addresses the difficulty of constructing optimal weighting functions for multidimensional PDEs.
Improves the robustness and generalization of deep operator frameworks when training data is limited.
Innovation

Methods, ideas, or system contributions that make the work stand out.

PG-VarMiON emulates the optimal Petrov-Galerkin weak form of the underlying PDE.
Implicitly learns the optimal weighting functions via deep learning.
Remains robust and generalizes well even with limited training data.
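The kind of explicit weighting-function construction that the paper says becomes intractable in higher dimensions can be illustrated in 1D. Below is a minimal sketch (an assumed model problem, not an example from the paper): for -u'' = f on (0,1) with u(0) = u(1) = 0, the Green's function G(x_i, ·) acts as the optimal weighting function for the nodal value at x_i, since beta_i = ∫ G(x_i, y) f(y) dy recovers the exact nodal values, making u_h = Σ beta_i φ_i the nodal interpolant of u.

```python
import numpy as np

# Assumed model problem (illustration only): -u'' = f on (0,1), u(0) = u(1) = 0.
# The Green's function G(x, y) is the optimal weighting function for the nodal
# value at x, because u(x) = ∫ G(x, y) f(y) dy holds exactly.

def greens_function(x, y):
    # G(x, y) = y(1 - x) for y <= x, and x(1 - y) for y > x
    return np.where(y <= x, y * (1.0 - x), x * (1.0 - y))

def trapezoid(vals, y):
    # simple trapezoidal rule (kept explicit for NumPy-version independence)
    return float(np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(y)))

def nodal_coefficients(f, nodes, n_quad=2001):
    # beta_i = <w_i, f> with weighting function w_i(y) = G(x_i, y)
    y = np.linspace(0.0, 1.0, n_quad)
    fy = f(y)
    return np.array([trapezoid(greens_function(xi, y) * fy, y) for xi in nodes])

nodes = np.linspace(0.0, 1.0, 11)[1:-1]                      # interior trial nodes
beta = nodal_coefficients(lambda y: np.ones_like(y), nodes)  # source f ≡ 1
exact = nodes * (1.0 - nodes) / 2.0                          # exact u(x) = x(1-x)/2
print(float(np.max(np.abs(beta - exact))))                   # ~quadrature-level error
```

No closed-form analogue of G is available for general multidimensional operators, which is precisely the gap PG-VarMiON fills by learning the weighting functions implicitly.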
Authors

Philip Charles (Department of Mathematics, University of Maryland)
Deep Ray (Assistant Professor, University of Maryland)
Yue Yu (Department of Mathematics, Lehigh University)
Joost Prins (PhD Candidate, Eindhoven University of Technology)
Hugo Melchers (Department of Mechanical Engineering, Eindhoven University of Technology)
Michael R. A. Abdelmalik (Department of Mechanical Engineering, Eindhoven University of Technology)
Jeffrey Cochran (Oden Institute for Computational Engineering and Sciences, University of Texas at Austin)
A. Oberai (Department of Aerospace and Mechanical Engineering, University of Southern California)
Thomas J. R. Hughes (Oden Institute for Computational Engineering and Sciences, University of Texas at Austin)
Mats G. Larson (Department of Mathematics, Umeå University)