Physics-Guided Transformer (PGT): Physics-Aware Attention Mechanism for PINNs

πŸ“… 2026-03-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenges of gradient imbalance and physical inconsistency in physics-informed neural networks (PINNs) when reconstructing physical fields from sparse, irregular observations. To overcome these limitations, the authors propose a novel physics-guided Transformer architecture that explicitly integrates physical priors into the self-attention mechanism. Specifically, additive biases derived from the heat kernel encode diffusion dynamics and temporal causality, while FiLM-modulated sinusoidal implicit neural networks adaptively regulate spectral responses. Experiments demonstrate that the model significantly enhances stability and generalization under data scarcity: it achieves a relative LΒ² error of 5.9Γ—10⁻³ in one-dimensional heat equation reconstruction with only 100 observation points, and simultaneously attains low PDE residual (8.3Γ—10⁻⁴) and low relative error (0.034) in two-dimensional flow past a cylinder, outperforming single-objective optimization approaches.
πŸ“ Abstract
Reconstructing continuous physical fields from sparse, irregular observations is a central challenge in scientific machine learning, particularly for systems governed by partial differential equations (PDEs). Existing physics-informed methods typically enforce governing equations as soft penalty terms during optimization, often leading to gradient imbalance, instability, and degraded physical consistency under limited data. We introduce the Physics-Guided Transformer (PGT), a neural architecture that embeds physical structure directly into the self-attention mechanism. Specifically, PGT incorporates a heat-kernel-derived additive bias into attention logits, encoding diffusion dynamics and temporal causality within the representation. Query coordinates attend to these physics-conditioned context tokens, and the resulting features are decoded using a FiLM-modulated sinusoidal implicit network that adaptively controls spectral response. We evaluate PGT on the one-dimensional heat equation and two-dimensional incompressible Navier-Stokes systems. In sparse 1D reconstruction with 100 observations, PGT achieves a relative L2 error of 5.9e-3, significantly outperforming both PINNs and sinusoidal representations. In the 2D cylinder wake problem, PGT uniquely achieves both low PDE residual (8.3e-4) and competitive relative error (0.034), outperforming methods that optimize only one objective. These results demonstrate that embedding physics within attention improves stability, generalization, and physical fidelity under data-scarce conditions.
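The abstract describes injecting a heat-kernel-derived additive bias into the attention logits, but does not give its exact form. A minimal sketch, assuming a 1D Gaussian heat kernel with diffusivity `nu` and a strict causality mask (all names and the precise bias formula are assumptions, not the paper's stated implementation):

```python
import numpy as np

def heat_kernel_bias(coords, times, nu=0.1):
    """Additive attention bias from an assumed 1D heat kernel:
    bias[i, j] = -|x_i - x_j|^2 / (4 * nu * (t_i - t_j)) for t_j < t_i,
    -inf for non-causal pairs (t_j >= t_i), and 0 on the diagonal so
    every token can at least attend to itself."""
    dx2 = (coords[:, None] - coords[None, :]) ** 2
    dt = times[:, None] - times[None, :]
    bias = np.full_like(dx2, -np.inf)
    causal = dt > 0
    bias[causal] = -dx2[causal] / (4.0 * nu * dt[causal])
    n = len(coords)
    bias[np.arange(n), np.arange(n)] = 0.0  # allow self-attention
    return bias

def biased_attention(Q, K, V, bias):
    """Scaled dot-product attention with the additive physics bias."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d) + bias
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V
```

Because the bias enters the logits additively, non-causal pairs receive zero attention weight after the softmax, while spatially distant pairs are smoothly down-weighted at a rate set by the diffusivity, which is how diffusion dynamics and temporal causality can be encoded without extra loss terms.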
Problem

Research questions and friction points this paper is trying to address.

physics-informed neural networks
sparse observations
partial differential equations
physical consistency
scientific machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-Guided Transformer
physics-aware attention
heat-kernel bias
FiLM-modulated implicit network
physics-informed learning
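The FiLM-modulated sinusoidal implicit network listed above pairs a SIREN-style sine activation with feature-wise affine conditioning. The paper does not spell out where the modulation is applied; a minimal one-layer sketch, assuming the FiLM parameters `gamma` and `beta` (produced by a conditioning network, not shown) rescale and shift the pre-activation before the sine:

```python
import numpy as np

def film_siren_layer(x, W, b, gamma, beta, omega0=30.0):
    """One FiLM-modulated sinusoidal layer (assumed form):
    h = sin(omega0 * (gamma * (x @ W + b) + beta)).
    Scaling the pre-activation by gamma stretches or compresses the
    effective frequency of the sine, so the conditioning features can
    adaptively control the layer's spectral response."""
    return np.sin(omega0 * (gamma * (x @ W + b) + beta))
```

Since the output of a sine is bounded in [-1, 1], stacking such layers keeps activations well-scaled, and the per-feature `gamma` gives the decoder a data-dependent handle on how much high-frequency detail it reconstructs.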