B-PL-PINN: Stabilizing PINN Training with Bayesian Pseudo Labeling

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Physics-informed neural networks (PINNs) often converge poorly on forward problems because information from the initial and boundary conditions propagates inefficiently into the interior of the computational domain. Building on the ensemble approach of Haitsiukevich and Ilin (2023), which grows each PINN's active training domain where the ensemble members agree, this work replaces the ensemble with a single Bayesian PINN and replaces consensus with the posterior predictive variance: pseudo-labels are generated only at points where the predictive variance signals high confidence, and the active training domain is expanded around those points. This uncertainty-aware pseudo-labeling propagates information from the initial/boundary conditions into the interior more reliably and stabilizes training. On a set of benchmark PDEs, the Bayesian approach outperforms the ensemble method and is competitive with PINN ensembles trained with combinations of Adam and L-BFGS. The core contribution is integrating principled Bayesian uncertainty estimation into the training-domain expansion mechanism, yielding an interpretable, adaptive remedy for the long-standing convergence problems of forward-problem PINNs.
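The mechanism above hinges on turning the Bayesian PINN's posterior predictive variance into a per-point confidence score. The excerpt does not specify how the posterior is approximated, so the following is a minimal sketch under stated assumptions: MC dropout stands in for the actual posterior approximation, and the network architecture, `n_samples`, and `var_threshold` are illustrative choices, not the authors' values.

```python
# Hedged sketch (not the authors' code): estimate the posterior predictive
# mean/variance of a PINN surrogate u_theta(t, x) by Monte Carlo sampling,
# here via MC dropout as a stand-in for the paper's Bayesian inference.
import torch
import torch.nn as nn


class DropoutPINN(nn.Module):
    """Small MLP surrogate u_theta(t, x); dropout is kept active at inference
    so repeated forward passes act as approximate posterior samples."""

    def __init__(self, hidden=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.Tanh(), nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, tx):
        return self.net(tx)


@torch.no_grad()
def predictive_stats(model, tx, n_samples=64):
    """Mean and variance of the (approximate) posterior predictive at points tx."""
    model.train()  # keep dropout active to draw posterior-like samples
    samples = torch.stack([model(tx) for _ in range(n_samples)], dim=0)
    return samples.mean(dim=0), samples.var(dim=0)


def confidence_mask(var, var_threshold=1e-4):
    """Points whose predictive variance falls below the threshold are treated
    as high-confidence candidates for pseudo-labeling."""
    return var.squeeze(-1) < var_threshold
```

Any other approximate posterior (variational weights, HMC samples, or an ensemble) could supply the samples; only `predictive_stats` would change.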

📝 Abstract
Training physics-informed neural networks (PINNs) for forward problems often suffers from severe convergence issues, hindering the propagation of information from regions where the desired solution is well-defined. Haitsiukevich and Ilin (2023) proposed an ensemble approach that extends the active training domain of each PINN based on i) ensemble consensus and ii) vicinity to (pseudo-)labeled points, thus ensuring that the information from the initial condition successfully propagates to the interior of the computational domain. In this work, we suggest replacing the ensemble by a Bayesian PINN, and consensus by an evaluation of the PINN's posterior variance. Our experiments show that this mathematically principled approach outperforms the ensemble on a set of benchmark problems and is competitive with PINN ensembles trained with combinations of Adam and LBFGS.
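To make the "extend the active training domain" step concrete, here is a hedged sketch of one expansion round: candidate points are promoted to pseudo-labeled points when their posterior predictive variance is low and they lie near an already (pseudo-)labeled point, mirroring criteria i) and ii) above with ensemble consensus swapped for the variance test. The thresholds `var_threshold` and `radius` and the helper `predictive_stats` (sketched earlier) are assumptions, not the paper's settings.

```python
# Hedged sketch of one training-domain expansion step: promote candidate
# collocation points to pseudo-labeled points when (i) their posterior
# predictive variance is low and (ii) they lie within `radius` of an
# already (pseudo-)labeled point.
import torch


def expand_training_domain(model, candidates, labeled_tx,
                           var_threshold=1e-4, radius=0.05, n_samples=64):
    """Return new pseudo-labeled inputs/targets and the remaining candidates."""
    mean, var = predictive_stats(model, candidates, n_samples=n_samples)

    # (i) confidence: low posterior predictive variance
    confident = var.squeeze(-1) < var_threshold

    # (ii) vicinity: close to some already (pseudo-)labeled point
    dists = torch.cdist(candidates, labeled_tx)   # (n_candidates, n_labeled)
    near = dists.min(dim=1).values < radius

    accept = confident & near
    new_tx = candidates[accept]
    new_u = mean[accept].detach()                 # predictive means as pseudo-labels
    remaining = candidates[~accept]
    return new_tx, new_u, remaining
```

The accepted points and their predictive means then enter the supervised (pseudo-label) loss term, while the PDE residual continues to be enforced on all collocation points.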
Problem

Research questions and friction points this paper is trying to address.

Stabilize PINN training for forward problems
Improve information propagation in PINNs
Replace ensemble approach with Bayesian PINN
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian PINN replaces the ensemble of PINNs
Posterior predictive variance gates pseudo-label generation and training-domain expansion (see the training-loop sketch below)
Outperforms the ensemble approach on benchmark problems; competitive with Adam+L-BFGS-trained PINN ensembles
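A rough picture of how these pieces could fit into an alternating train/expand loop is sketched below. It reuses the hypothetical helpers `DropoutPINN`, `predictive_stats`, and `expand_training_domain` from the earlier sketches; `pde_residual` is written for a 1D heat equation u_t = nu * u_xx purely as an example, and none of the hyperparameters come from the paper.

```python
# Hedged sketch of the alternating loop: fit the Bayesian PINN on the current
# (pseudo-)labeled set plus the PDE residual, then grow the set from
# high-confidence candidates near already labeled points.
import torch


def pde_residual(model, tx, nu=0.01):
    """Residual of u_t - nu * u_xx = 0 at points tx = (t, x); example PDE only."""
    tx = tx.clone().requires_grad_(True)
    u = model(tx)
    grads = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:]
    return u_t - nu * u_xx


def train_with_expansion(model, labeled_tx, labeled_u, candidates,
                         rounds=10, epochs_per_round=500, lr=1e-3):
    """Alternate between fitting the PINN and expanding the pseudo-labeled set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        for _ in range(epochs_per_round):
            opt.zero_grad()
            data_loss = ((model(labeled_tx) - labeled_u) ** 2).mean()
            coll = torch.cat([labeled_tx, candidates], dim=0)
            phys_loss = (pde_residual(model, coll) ** 2).mean()
            (data_loss + phys_loss).backward()
            opt.step()
        # grow the pseudo-labeled set from confident, nearby candidates
        new_tx, new_u, candidates = expand_training_domain(
            model, candidates, labeled_tx)
        labeled_tx = torch.cat([labeled_tx, new_tx], dim=0)
        labeled_u = torch.cat([labeled_u, new_u], dim=0)
    return model, labeled_tx, labeled_u
```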