🤖 AI Summary
Existing deep operator networks (e.g., DeepONet, FNO) face critical bottlenecks in parametric PDE solving: heavy reliance on large training datasets, absence of explicit physical structure, and unstable trunk-network features such as mode collapse. To address these, we develop PIP$^{2}$Net (Physics-informed Partition Penalty Deep Operator Network), a physics-informed operator learning framework grounded in the partition-of-unity (PoU) principle that revises the existing POU-PI-DeepONet formulation. It introduces a physics-guided regional penalty mechanism that explicitly embeds PDE prior constraints and adaptive domain-wise regularization, thereby significantly enhancing trunk-output consistency and representational capacity. Evaluated on the Burgers, Allen–Cahn, and diffusion–reaction equations, PIP$^{2}$Net consistently outperforms DeepONet, PI-DeepONet, and POU-DeepONet, achieving superior prediction accuracy and generalization robustness in data-limited regimes. This work establishes a new paradigm for interpretable, low-data, physics-driven operator learning.
📝 Abstract
Operator learning has become a powerful tool for accelerating the solution of parameterized partial differential equations (PDEs), enabling rapid prediction of full spatiotemporal fields for new initial conditions or forcing functions. Existing architectures such as DeepONet and the Fourier Neural Operator (FNO) show strong empirical performance but often require large training datasets, lack explicit physical structure, and may suffer from instability in their trunk-network features, where mode imbalance or collapse can hinder accurate operator approximation. Motivated by the stability and locality of classical partition-of-unity (PoU) methods, we investigate PoU-based regularization techniques for operator learning and develop a revised formulation of the existing POU-PI-DeepONet framework. The resulting *P*hysics-*i*nformed *P*artition *P*enalty Deep Operator Network (PIP$^{2}$Net) introduces a simplified and more principled partition penalty that better coordinates the trunk outputs, yielding greater expressiveness without sacrificing the flexibility of DeepONet. We evaluate PIP$^{2}$Net on three nonlinear PDEs: the viscous Burgers equation, the Allen–Cahn equation, and a diffusion–reaction system. The results show that it consistently outperforms DeepONet, PI-DeepONet, and POU-DeepONet in prediction accuracy and robustness.
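The abstract does not specify the exact form of the partition penalty; as an illustration only, a minimal NumPy sketch of a *generic* PoU-style regularizer (our assumption, not the paper's loss) penalizes the trunk features $\varphi_k(x)$ for deviating from summing to one at each collocation point, which is the defining property of a classical partition of unity:

```python
import numpy as np

def pou_penalty(trunk_outputs):
    """Generic partition-of-unity penalty (illustrative assumption).

    Encourages trunk features phi_k(x) to sum to one at every
    collocation point, as in classical PoU methods.
    trunk_outputs: array of shape (n_points, n_modes).
    """
    sums = trunk_outputs.sum(axis=1)           # sum over modes at each point
    return float(np.mean((sums - 1.0) ** 2))   # mean squared deviation from 1

# Toy check: softmax-normalized features form an exact partition of unity.
rng = np.random.default_rng(0)
raw = rng.normal(size=(128, 8))
phi = np.exp(raw) / np.exp(raw).sum(axis=1, keepdims=True)  # softmax rows

print(pou_penalty(phi))  # ~0: normalized rows already sum to one
print(pou_penalty(raw))  # positive for unnormalized features
```

In a physics-informed training loop this term would simply be added, with a weight, to the PDE residual and data losses; the adaptive domain-wise weighting described in the summary is beyond this sketch.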