🤖 AI Summary
This work addresses the optimal approximation of solutions to multidimensional partial differential equations (PDEs) in finite-dimensional trial spaces. The authors propose PG-VarMiON, an operator network that embeds the optimal Petrov-Galerkin weak formulation directly into a deep operator architecture, implicitly learning weighting (test) functions adapted to the target norm and thereby sidestepping the explicit construction of optimal weighting functions, which is intractable for general multidimensional problems. The method combines a variationally mimetic network design with supervised training on pairs of PDE data and solutions, and comes with an approximation error estimate that accounts for the error in learning the true weighting functions. Experiments on the advection-diffusion equation show that PG-VarMiON outperforms popular operator-network baselines, particularly when training data is limited, with superior accuracy, generalization, and robustness.
📝 Abstract
The optimal Petrov-Galerkin formulation for solving partial differential equations (PDEs) recovers the best approximation in a specified finite-dimensional (trial) space with respect to a suitable norm. However, recovering this optimal solution is contingent on being able to construct the optimal weighting functions associated with the trial basis. While explicit constructions are available for simple one- and two-dimensional problems, such constructions remain elusive for general multidimensional problems. In the present work, we revisit the optimal Petrov-Galerkin formulation through the lens of deep learning. We propose an operator network framework called the Petrov-Galerkin Variationally Mimetic Operator Network (PG-VarMiON), which emulates the optimal Petrov-Galerkin weak form of the underlying PDE. The PG-VarMiON is trained in a supervised manner using a labeled dataset comprising the PDE data and the corresponding PDE solution, with the training loss depending on the choice of the optimal norm. The special architecture of the PG-VarMiON allows it to implicitly learn the optimal weighting functions, endowing the proposed operator network with the ability to generalize well beyond the training set. We derive approximation error estimates for PG-VarMiON, highlighting the contributions of various error sources, particularly the error in learning the true weighting functions. Several numerical results are presented for the advection-diffusion equation to demonstrate the efficacy of the proposed method. By embedding the Petrov-Galerkin structure into the network architecture, PG-VarMiON exhibits greater robustness and improved generalization compared to other popular deep operator frameworks, particularly when the training data is limited.
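The optimal-weighting idea underlying the abstract can be illustrated with a small finite-dimensional sketch (this is an illustration of the classical Petrov-Galerkin principle, not the paper's method or code). For a discrete operator `A`, a trial basis given by the columns of `Phi`, and the Euclidean norm as the target norm, choosing the test (weighting) matrix `W = A^{-T} Phi` makes the Petrov-Galerkin solution coincide with the orthogonal projection of the exact solution onto the trial space, i.e., the best approximation; the standard Galerkin choice `W = Phi` generally does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 4

# Discrete stand-in for the PDE operator: a well-conditioned nonsymmetric matrix.
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
u = np.linalg.solve(A, f)          # "exact" solution

Phi = rng.standard_normal((n, m))  # trial basis (columns span the trial space)

def petrov_galerkin(W):
    """Solve the weak system (W^T A Phi) c = W^T f and return u_h = Phi c."""
    c = np.linalg.solve(W.T @ A @ Phi, W.T @ f)
    return Phi @ c

# Standard (Bubnov-)Galerkin: test space equals trial space.
u_gal = petrov_galerkin(Phi)

# Optimal weighting functions for the Euclidean norm: W = A^{-T} Phi, so that
# W^T A Phi = Phi^T Phi and W^T f = Phi^T u, i.e., u_h is the projection of u.
W_opt = np.linalg.solve(A.T, Phi)
u_pg = petrov_galerkin(W_opt)

# Best approximation of u in span(Phi) with respect to the Euclidean norm.
u_proj = Phi @ np.linalg.lstsq(Phi, u, rcond=None)[0]

assert np.allclose(u_pg, u_proj)   # optimal PG recovers the best approximation
assert np.linalg.norm(u - u_pg) <= np.linalg.norm(u - u_gal) + 1e-12
```

Note that forming `W_opt` requires a solve with `A^T`, which is exactly the expensive step that becomes intractable for general multidimensional PDEs; PG-VarMiON instead learns the action of these weighting functions implicitly from data.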