🤖 AI Summary
To address the high computational cost and poor scalability of uncertainty quantification in physics-informed neural networks (PINNs), this paper proposes E-PINNs, which pair a pre-trained deterministic PINN backbone with a lightweight, plug-and-play auxiliary network, the epinet, to estimate epistemic uncertainty without retraining the backbone. The combined model jointly yields point predictions and predictive variances through cheap deterministic forward passes. On diverse partial differential equation benchmarks, E-PINNs achieve coverage comparable to Bayesian PINNs (B-PINNs) while yielding sharper prediction intervals. Compared to Dropout-PINNs, they reduce calibration error by up to 42%, significantly improving both robustness and calibration quality. Crucially, the epinet incurs substantially lower computational overhead than these baselines, enabling scalable, practical uncertainty-aware PINN deployment.
📝 Abstract
Physics-informed neural networks (PINNs) have shown promise as a framework for solving forward and inverse problems involving partial differential equations. Despite recent progress in the field, quantifying uncertainty in these networks remains challenging. While Bayesian PINNs (B-PINNs) offer a principled way to capture uncertainty through Bayesian inference, they can be computationally expensive for large-scale applications. In this work, we propose Epistemic Physics-Informed Neural Networks (E-PINNs), a framework that leverages a small network, the *epinet*, to efficiently quantify uncertainty in PINNs. The proposed approach works as an add-on to existing, pre-trained PINNs with a small computational overhead. We demonstrate the applicability of the proposed framework on various test cases and compare the results with B-PINNs using Hamiltonian Monte Carlo (HMC) for posterior estimation and with dropout-equipped PINNs (Dropout-PINNs). Our experiments show that E-PINNs provide coverage similar to B-PINNs, often with comparable sharpness, while being computationally more efficient. Combined with E-PINNs' more consistent uncertainty estimates and better calibration than Dropout-PINNs on the examples presented, this indicates that E-PINNs offer a promising accuracy-efficiency trade-off.
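To make the plug-and-play structure concrete, below is a minimal PyTorch sketch of an epinet-style add-on, assuming the standard epistemic-neural-network construction: the epinet consumes frozen backbone features together with a random epistemic index z, and sampling z at inference yields a predictive mean and variance. All names (`TinyPINN`, `features`, `INDEX_DIM`) and layer sizes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a frozen PINN backbone paired with a small
# epinet add-on for epistemic uncertainty. Names and sizes are assumptions.

INDEX_DIM = 8  # dimension of the random epistemic index z (assumed)

class TinyPINN(nn.Module):
    """Stand-in for a pre-trained deterministic PINN backbone."""
    def __init__(self, in_dim: int = 1, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, width), nn.Tanh(),
                                  nn.Linear(width, width), nn.Tanh())
        self.head = nn.Linear(width, 1)

    def features(self, x):
        # Hypothetical hook exposing the last hidden layer to the epinet.
        return self.body(x)

class Epinet(nn.Module):
    """Small add-on network: consumes frozen features and an index z."""
    def __init__(self, feat_dim: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim + INDEX_DIM, hidden),
                                 nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, feats, z):
        # detach(): gradients never flow into the pre-trained backbone.
        h = torch.cat([feats.detach(), z.expand(feats.shape[0], -1)], dim=-1)
        return self.net(h)

@torch.no_grad()
def predict(backbone, epinet, x, n_samples: int = 32):
    """Mean prediction and epistemic variance from sampled indices z."""
    feats = backbone.features(x)
    base = backbone.head(feats)  # deterministic PINN output
    draws = torch.stack([base + epinet(feats, torch.randn(1, INDEX_DIM))
                         for _ in range(n_samples)])
    return draws.mean(0), draws.var(0)

backbone, epinet = TinyPINN(), Epinet(feat_dim=32)
x = torch.linspace(0.0, 1.0, 50).unsqueeze(-1)  # e.g. collocation points
mean, var = predict(backbone, epinet, x)
print(mean.shape, var.shape)  # torch.Size([50, 1]) for both
```

The key design choice sketched here is the `detach()` on the backbone features: the pre-trained PINN stays fixed and only the small epinet is trained, which is what makes an add-on of this kind cheap relative to B-PINN posterior sampling.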