PIP$^2$ Net: Physics-informed Partition Penalty Deep Operator Network

📅 2025-12-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing deep operator networks (e.g., DeepONet, FNO) face critical bottlenecks in parametric PDE solving: heavy reliance on large training datasets, absence of explicit physical structure, and unstable trunk-network features such as mode collapse. To address these, the authors propose PIP$^2$ Net, a physics-informed operator learning framework grounded in the partition-of-unity (PoU) principle that revises the earlier POU-PI-DeepONet formulation. It introduces a physics-guided partition penalty that explicitly embeds PDE prior constraints and adaptive domain-wise regularization, thereby enhancing trunk-output consistency and representational capacity. Evaluated on the Burgers, Allen–Cahn, and diffusion–reaction equations, PIP$^2$ Net consistently outperforms DeepONet, PI-DeepONet, and POU-DeepONet, achieving superior prediction accuracy and generalization robustness in data-limited regimes. This work establishes a new paradigm for interpretable, low-data, physics-driven operator learning.
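The partition-of-unity idea behind the penalty can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's actual loss: it assumes the penalty measures how far the trunk basis functions are from summing to one at each query point, which is the defining PoU property.

```python
import numpy as np

def pou_partition_penalty(trunk_outputs):
    """Hypothetical PoU-style penalty: mean squared deviation of the
    pointwise sum of trunk basis functions from one.
    trunk_outputs: array of shape (n_points, n_basis)."""
    sums = trunk_outputs.sum(axis=1)          # sum over basis functions at each point
    return float(np.mean((sums - 1.0) ** 2))  # zero iff the basis forms a partition of unity

# Softmax-normalized trunk outputs satisfy the PoU constraint exactly,
# while raw unnormalized outputs generally violate it.
rng = np.random.default_rng(0)
raw = rng.normal(size=(5, 4))
softmaxed = np.exp(raw) / np.exp(raw).sum(axis=1, keepdims=True)
print(pou_partition_penalty(softmaxed))  # ~0.0
print(pou_partition_penalty(raw) > 0.0)  # True
```

In training, a term like this would be added to the physics-informed loss, pushing the trunk toward a stable, locally supported basis rather than collapsed or imbalanced modes.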

📝 Abstract
Operator learning has become a powerful tool for accelerating the solution of parameterized partial differential equations (PDEs), enabling rapid prediction of full spatiotemporal fields for new initial conditions or forcing functions. Existing architectures such as DeepONet and the Fourier Neural Operator (FNO) show strong empirical performance but often require large training datasets, lack explicit physical structure, and may suffer from instability in their trunk-network features, where mode imbalance or collapse can hinder accurate operator approximation. Motivated by the stability and locality of classical partition-of-unity (PoU) methods, we investigate PoU-based regularization techniques for operator learning and develop a revised formulation of the existing POU-PI-DeepONet framework. The resulting \emph{P}hysics-\emph{i}nformed \emph{P}artition \emph{P}enalty Deep Operator Network (PIP$^{2}$ Net) introduces a simplified and more principled partition penalty that improves the coordination of the trunk outputs, yielding greater expressiveness without sacrificing the flexibility of DeepONet. We evaluate PIP$^{2}$ Net on three nonlinear PDEs: the viscous Burgers equation, the Allen–Cahn equation, and a diffusion–reaction system. The results show that it consistently outperforms DeepONet, PI-DeepONet, and POU-DeepONet in prediction accuracy and robustness.
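The branch–trunk structure that the abstract refers to can be sketched in a few lines of NumPy. This is a generic toy DeepONet, not the paper's implementation: the layer sizes, sensor count, and latent dimension below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def init(sizes, rng):
    """Random weights for a small MLP given a list of layer sizes."""
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(x, weights):
    """Tanh MLP with a linear output layer."""
    for W, b in weights[:-1]:
        x = np.tanh(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

# Branch net encodes the input function u sampled at m sensor locations;
# trunk net encodes a query coordinate (x, t); p is the latent basis size.
m, p = 20, 8
branch_w = init([m, 32, p], rng)
trunk_w = init([2, 32, p], rng)

def deeponet(u_sensors, coords):
    b = mlp(u_sensors, branch_w)  # (p,) coefficients from the input function
    t = mlp(coords, trunk_w)      # (n_points, p) trunk basis values
    return t @ b                  # predicted field at each query coordinate

u = np.sin(np.linspace(0, np.pi, m))  # example input function at the sensors
xy = rng.uniform(size=(10, 2))        # 10 query points (x, t)
pred = deeponet(u, xy)
print(pred.shape)  # (10,)
```

PIP$^{2}$ Net keeps this flexible branch–trunk factorization and adds its partition penalty on the trunk basis `t`, which is where the instability the abstract describes arises.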
Problem

Research questions and friction points this paper is trying to address.

Improves operator learning for PDEs with physics-informed partition penalty
Addresses instability and mode collapse in existing neural operator architectures
Enhances prediction accuracy for nonlinear partial differential equations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-informed partition penalty for regularization
Partition-of-unity methods to stabilize trunk networks
Simplified penalty improves expressiveness and flexibility
Hongjin Mi
School of Mathematics, Shanghai University of Finance and Economics, No.777 Guoding Road, Shanghai 200433, China
Huiqiang Lun
Faculty of Liberal Arts and Professional Studies, York University, 4700 Keele St, North York, ON M3J1P3, Canada
Changhong Mou
Assistant Professor, Department of Mathematics & Statistics, Utah State University
Reduced Order Modeling · Data Assimilation · Machine Learning · Computational Mathematics
Yeyu Zhang
School of Mathematics, Shanghai University of Finance and Economics, No.777 Guoding Road, Shanghai 200433, China