🤖 AI Summary
This work addresses a key limitation of Physics-Informed Neural Networks (PINNs): their accuracy and convergence degrade when training data are inconsistent with the governing partial differential equation (PDE). The authors introduce the concept of a “consistency barrier,” an inherent lower bound on error arising from the tension between data fidelity and PDE residual enforcement. Through controlled experiments on the 1D viscous Burgers equation—leveraging analytical solutions, multi-fidelity numerical data, and residual-driven training—they demonstrate that PINN errors under low-fidelity data are fundamentally constrained by this consistency barrier. In contrast, when high-fidelity data are employed, the barrier is effectively eliminated, yielding PINN solutions statistically indistinguishable from the analytical solution. This study provides both theoretical insight and empirical evidence to advance the understanding and robustness of PINNs in practical applications.
📝 Abstract
Physics-informed neural networks (PINNs) have gained significant attention as a surrogate modeling strategy for partial differential equations (PDEs), particularly in regimes where labeled data are scarce and physical constraints can be leveraged to regularize the learning process. In practice, however, PINNs are frequently trained using experimental or numerical data that are not fully consistent with the governing equations due to measurement noise, discretization errors, or modeling assumptions. The implications of such data-to-PDE inconsistencies for the accuracy and convergence of PINNs remain insufficiently understood. In this work, we systematically analyze how data inconsistency fundamentally limits the attainable accuracy of PINNs. We introduce the concept of a consistency barrier, defined as an intrinsic lower bound on the error that arises from mismatches between the fidelity of the data and the exact enforcement of the PDE residual. To isolate and quantify this effect, we consider the 1D viscous Burgers equation with a manufactured analytical solution, which enables full control over data fidelity and residual errors. PINNs are trained using datasets of progressively increasing numerical accuracy, as well as perfectly consistent analytical data. Results show that while the inclusion of the PDE residual allows PINNs to partially compensate for low-fidelity data and recover the dominant physical structure, the training process ultimately saturates at an error level dictated by the data inconsistency. When high-fidelity numerical data are employed, PINN solutions become indistinguishable from those trained on analytical data, indicating that the consistency barrier is effectively removed. These findings clarify the interplay between data quality and physics enforcement in PINNs, providing practical guidance for the construction and interpretation of physics-informed surrogate models.
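To make the setup concrete, the composite objective described in the abstract can be sketched as a data-fidelity term plus a PDE-residual term for the 1D viscous Burgers equation, u_t + u u_x = ν u_xx. The following is a minimal PyTorch illustration, not the authors' code: the network size, viscosity, point counts, and the placeholder "data" profile are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation) of a PINN loss
# for 1D viscous Burgers: u_t + u*u_x = nu * u_xx, with an MLP surrogate u(t, x).
import torch

torch.manual_seed(0)
NU = 0.01 / torch.pi  # illustrative viscosity (a common benchmark value)

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(t, x):
    """Burgers residual u_t + u*u_x - nu*u_xx via automatic differentiation."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u * u_x - NU * u_xx

# Labeled data (t, x, u): in the paper these come from a manufactured analytical
# solution or from numerical solvers of varying fidelity; here a placeholder.
t_d = torch.rand(64, 1)
x_d = torch.rand(64, 1) * 2 - 1
u_d = -torch.sin(torch.pi * x_d)  # placeholder profile, not the paper's data

# Collocation points where the PDE residual is enforced.
t_c = torch.rand(256, 1)
x_c = torch.rand(256, 1) * 2 - 1

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5):  # a few steps for illustration; real training runs far longer
    opt.zero_grad()
    loss_data = ((net(torch.cat([t_d, x_d], dim=1)) - u_d) ** 2).mean()
    loss_pde = (pde_residual(t_c, x_c) ** 2).mean()
    loss = loss_data + loss_pde  # relative weighting is a tuning choice
    loss.backward()
    opt.step()
```

In this framing, the consistency barrier appears when `u_d` does not satisfy the PDE exactly: the two loss terms then pull the network toward different functions, so their sum cannot be driven to zero and the error saturates regardless of training budget.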