BWLer: Barycentric Weight Layer Elucidates a Precision-Conditioning Tradeoff for PINNs

📅 2025-06-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies the primary cause of accuracy limitations in physics-informed neural networks (PINNs) for solving partial differential equations (PDEs) as the inherent numerical ill-conditioning of multilayer perceptron (MLP) architectures, not the PDEs themselves. To address this, the authors propose the barycentric weight layer (BWLer), which explicitly decouples function representation from derivative computation: high-accuracy derivatives are computed via barycentric polynomial interpolation and spectral differentiation, and training remains compatible with first-order optimizers through preconditioning. The work shows, for the first time, that MLPs exhibit an error floor far above machine precision even in the absence of PDE constraints. Incorporating BWLer yields RMSE improvements of up to ten-billionfold across five canonical PDE benchmarks. Notably, the explicit BWLer achieves near-machine precision on multiple problems, thereby overcoming a long-standing accuracy bottleneck in PINNs.

Technology Category

Application Category

📝 Abstract
Physics-informed neural networks (PINNs) offer a flexible way to solve partial differential equations (PDEs) with machine learning, yet they still fall well short of the machine-precision accuracy many scientific tasks demand. In this work, we investigate whether the precision ceiling comes from the ill-conditioning of the PDEs or from the typical multi-layer perceptron (MLP) architecture. We introduce the Barycentric Weight Layer (BWLer), which models the PDE solution through barycentric polynomial interpolation. A BWLer can be added on top of an existing MLP (a BWLer-hat) or replace it completely (explicit BWLer), cleanly separating how we represent the solution from how we take derivatives for the PDE loss. Using BWLer, we identify fundamental precision limitations within the MLP: on a simple 1-D interpolation task, even MLPs with O(1e5) parameters stall around 1e-8 RMSE -- about eight orders of magnitude above float64 machine precision -- before any PDE terms are added. In PDE learning, adding a BWLer lifts this ceiling and exposes a tradeoff between achievable accuracy and the conditioning of the PDE loss. For linear PDEs we fully characterize this tradeoff with an explicit error decomposition and navigate it during training with spectral derivatives and preconditioning. Across five benchmark PDEs, adding a BWLer on top of an MLP improves RMSE by up to 30x for convection, 10x for reaction, and 1800x for wave equations while remaining compatible with first-order optimizers. Replacing the MLP entirely lets an explicit BWLer reach near-machine-precision on convection, reaction, and wave problems (up to 10 billion times better than prior results) and match the performance of standard PINNs on stiff Burgers' and irregular-geometry Poisson problems. Together, these findings point to a practical path for combining the flexibility of PINNs with the precision of classical spectral solvers.
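The abstract describes representing the solution via barycentric polynomial interpolation. A minimal NumPy sketch of that formula at Chebyshev nodes (following Berrut & Trefethen; this is an illustrative reconstruction, not the paper's code — the function names `cheb_nodes_weights` and `bary_eval` are ours) shows how a smooth function can be interpolated to near float64 machine precision, the error floor the MLP alone cannot reach:

```python
import numpy as np

def cheb_nodes_weights(n):
    """Chebyshev points of the second kind on [-1, 1] and their
    barycentric weights w_j = (-1)^j (halved at the endpoints)."""
    j = np.arange(n + 1)
    x = np.cos(j * np.pi / n)
    w = (-1.0) ** j
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def bary_eval(x_eval, x, w, f):
    """Evaluate the barycentric interpolant of the data (x, f) at x_eval."""
    diff = x_eval[:, None] - x[None, :]
    exact = diff == 0.0
    diff[exact] = 1.0                 # avoid division by zero at the nodes
    c = w / diff
    p = (c @ f) / c.sum(axis=1)
    hit = exact.any(axis=1)           # at a node, return the stored value
    p[hit] = f[exact.argmax(axis=1)[hit]]
    return p

# interpolate f(x) = sin(pi x) on 33 Chebyshev nodes
x, w = cheb_nodes_weights(32)
f = np.sin(np.pi * x)
xs = np.linspace(-1.0, 1.0, 1001)
err = np.abs(bary_eval(xs, x, w, f) - np.sin(np.pi * xs)).max()
```

For this smooth function, `err` sits near float64 machine precision, illustrating the gap between a polynomial representation and the ~1e-8 MLP plateau reported above.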
Problem

Research questions and friction points this paper is trying to address.

Investigates precision limitations in PINNs for PDE solving
Introduces BWLer to address MLP precision-conditioning tradeoff
Enhances accuracy for various PDEs using BWLer techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Barycentric Weight Layer enhances PDE solution precision
Spectral derivatives and preconditioning optimize training accuracy
Explicit BWLer achieves near-machine-precision on multiple PDEs
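The innovation bullets mention spectral derivatives for the PDE loss. A standard way to take such derivatives on a Chebyshev grid is the spectral differentiation matrix; the sketch below follows Trefethen's classic construction (an assumption about the general technique, not the paper's implementation — `cheb_diff_matrix` is our name):

```python
import numpy as np

def cheb_diff_matrix(n):
    """Chebyshev spectral differentiation matrix on the n+1 points
    x_j = cos(j*pi/n), as in Trefethen, 'Spectral Methods in MATLAB'."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))   # negative-sum trick fixes the diagonal
    return D, x

# spectral derivative of exp on a 25-point grid
D, x = cheb_diff_matrix(24)
err = np.abs(D @ np.exp(x) - np.exp(x)).max()
```

The derivative of a smooth function is accurate to roughly machine precision times the conditioning of `D` (which grows like O(n^2)) — a concrete instance of the precision-conditioning tradeoff the paper analyzes.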
Jerry Liu
Institute for Computational & Mathematical Engineering, Stanford University
Yasa Baig
Department of Bioengineering, Stanford University
Denise Hui Jean Lee
Department of Computer Science, Stanford University
Rajat Vadiraj Dwaraknath
Institute for Computational & Mathematical Engineering, Stanford University
Atri Rudra
Katherine Johnson Chair in AI, Professor, CSE, University at Buffalo
Structured Linear Algebra, Society and Computing, Coding Theory, Database algorithms
Chris Ré
Department of Computer Science, Stanford University