Coordinate Encoding on Linear Grids for Physics-Informed Neural Networks

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the slow convergence of physics-informed neural networks (PINNs) when solving partial differential equations, a challenge often attributed to spectral bias. To mitigate this issue, the authors propose a coordinate encoding mechanism based on axis-aligned independent linear grids, which employs natural cubic spline interpolation to construct a smooth coordinate mapping over local solution domains. This approach preserves derivative continuity while effectively alleviating spectral bias. The resulting method significantly enhances both the training efficiency and stability of PINNs. Numerical experiments demonstrate that the proposed strategy outperforms existing mesh-free PDE solvers in terms of convergence speed and computational cost.

📝 Abstract
In solving partial differential equations (PDEs), machine learning that exploits physical laws has received considerable attention owing to advantages such as mesh-free solutions, unsupervised learning, and feasibility for high-dimensional problems. An effective approach is physics-informed neural networks (PINNs), which build on deep neural networks known for their excellent performance in various academic and industrial applications. However, PINNs struggle with model training owing to significantly slow convergence caused by the spectral-bias problem. In this study, we propose a PINN-based method equipped with a coordinate-encoding layer on linear grid cells. The proposed method improves training convergence speed by separating local domains using grid cells. Moreover, it reduces the overall computational cost by using axis-independent linear grid cells. The method also achieves efficient and stable model training by interpolating the encoded coordinates between grid points with natural cubic splines, which guarantees continuous derivatives of the model as computed for the loss functions. The results of numerical experiments demonstrate the effective performance and fast training convergence of the proposed method.
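The abstract's key ingredients — one independent 1-D grid per coordinate axis, and natural cubic spline interpolation between grid points so that the encoding (and hence the PDE residual loss) has continuous derivatives — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid resolution, the per-axis feature values (here random placeholders standing in for trainable parameters), and the `encode` helper are all assumptions for demonstration.

```python
import numpy as np

def natural_cubic_spline(xk, yk):
    """Return an evaluator for the natural cubic spline through knots (xk, yk).

    Natural boundary conditions set the second derivative to zero at both end
    knots, so the encoded coordinate is C^2-smooth across grid cells -- the
    continuity property the paper relies on for stable loss gradients.
    """
    xk, yk = np.asarray(xk, float), np.asarray(yk, float)
    n = len(xk) - 1
    h = np.diff(xk)
    # Solve the standard tridiagonal system for knot second derivatives M
    # (dense solve for clarity; the system is small, one per axis).
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0  # natural boundary: M_0 = M_n = 0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        b[i] = 6.0 * ((yk[i + 1] - yk[i]) / h[i] - (yk[i] - yk[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, b)

    def evaluate(x):
        x = np.asarray(x, float)
        i = np.clip(np.searchsorted(xk, x) - 1, 0, n - 1)  # cell index per query
        dl, dr = x - xk[i], xk[i + 1] - x
        return ((M[i] * dr**3 + M[i + 1] * dl**3) / (6 * h[i])
                + (yk[i] / h[i] - M[i] * h[i] / 6) * dr
                + (yk[i + 1] / h[i] - M[i + 1] * h[i] / 6) * dl)

    return evaluate

def encode(point, splines):
    """Axis-independent encoding: each coordinate passes through its own 1-D
    spline; the concatenated features would then feed the PINN's MLP."""
    return np.array([s(c) for s, c in zip(splines, point)])

# One linear grid (and one feature vector) per axis of a 2-D domain.
# The random values are placeholders for learned encoding parameters.
grid = np.linspace(0.0, 1.0, 6)
rng = np.random.default_rng(0)
splines = [natural_cubic_spline(grid, rng.normal(size=grid.size)) for _ in range(2)]
features = encode([0.37, 0.81], splines)  # smooth 2-feature encoding of (x, t)
```

Because each axis is encoded independently, the interpolation cost grows linearly with the input dimension rather than with the number of full-dimensional grid cells, which is the source of the computational savings the abstract claims.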
Problem

Research questions and friction points this paper is trying to address.

Physics-Informed Neural Networks
spectral bias
slow convergence
partial differential equations
training convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-Informed Neural Networks
Coordinate Encoding
Linear Grid Cells
Natural Cubic Splines
Spectral Bias
Tetsuro Tsuchino
Graduate School of Engineering, Gifu University, Japan
Motoki Shiga
Professor, Tohoku University, Japan
Machine Learning, Materials Informatics, Bioinformatics