🤖 AI Summary
Physics-informed neural networks (PINNs) struggle to propagate initial conditions accurately when solving partial differential equations (PDEs), primarily because the network's simplicity bias and the discrete temporal sampling of collocation points conflict with the inherent continuity of PDE solutions. This work is the first to formally identify and analyze this mechanism. To address it, we propose a state-space model (SSM)-based sub-sequence modeling paradigm: by partitioning the temporal domain into blocks and jointly modeling physical constraints with continuous-discrete hybrid representation learning, our approach encodes solution continuity as an explicit prior, thereby eliminating the bias and enabling robust long-range propagation of initial conditions. Evaluated across diverse PDE benchmarks, our method reduces prediction error by up to 86.3% relative to current state-of-the-art methods, establishing new performance frontiers in data-free PDE solving.
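The "continuous-discrete hybrid" role an SSM plays above can be sketched with a textbook zero-order-hold (ZOH) discretization of a scalar linear SSM. This is a generic illustration of why a discrete recurrence derived this way exactly respects the underlying continuous dynamics, not PINNMamba's actual parameterization; the constants `a`, `b`, and `dt` are arbitrary choices for the sketch.

```python
import math

# Scalar continuous-time SSM: x'(t) = a*x(t) + b*u(t).
# For piecewise-constant input, zero-order-hold discretization gives an
# *exact* step map x[k+1] = ad*x[k] + bd*u[k]: the discrete recurrence
# matches the continuous flow at every sample, which is the
# "continuous-discrete articulation" property an SSM provides.
a, b = -1.0, 0.5       # illustrative dynamics/input coefficients
dt = 0.1               # sampling interval between discrete steps
ad = math.exp(a * dt)          # discrete state coefficient under ZOH
bd = (ad - 1.0) / a * b        # discrete input coefficient under ZOH

def ssm_rollout(x0, u, steps):
    """Roll the discrete recurrence forward `steps` times with constant input u."""
    x = x0
    for _ in range(steps):
        x = ad * x + bd * u
    return x

# With u = 0, ten steps of dt should reproduce the continuous solution
# x(t) = exp(a*t) * x0 at t = 1.0, up to floating-point error.
x_discrete = ssm_rollout(1.0, 0.0, 10)
x_continuous = math.exp(a * 1.0) * 1.0
```

The key design point is that `ad` and `bd` are derived from the continuous operator rather than chosen freely, so refining or coarsening `dt` changes the sampling grid without changing the trajectory being represented.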
📝 Abstract
Physics-Informed Neural Networks (PINNs) are a class of deep-learning-based numerical solvers for partial differential equations (PDEs). Existing PINNs often suffer from failure modes in which they cannot propagate the patterns of initial conditions. We discover that these failure modes are caused by the simplicity bias of neural networks and the mismatch between the PDE's continuity and the PINN's discrete sampling. We reveal that the State Space Model (SSM) can serve as a continuous-discrete articulation that allows initial-condition propagation, and that the simplicity bias can be eliminated by aligning sub-sequences of moderate granularity. Accordingly, we propose PINNMamba, a novel framework that introduces sub-sequence modeling with SSMs. Experimental results show that PINNMamba can reduce errors by up to 86.3% compared with state-of-the-art architectures. Our code is available at https://github.com/miniHuiHui/PINNMamba.
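To make "deep-learning-based numerical solver" concrete, the sketch below shows the kind of physics-informed loss a PINN minimizes: a PDE-residual term on interior collocation points plus an initial-condition term. It is a toy for the ODE u'(t) = -u(t), u(0) = 1 (exact solution exp(-t)), using finite differences in place of the automatic differentiation a real PINN would use; the function names and constants are invented for illustration and are not from the paper.

```python
import numpy as np

def pinn_style_loss(u, ts, ic_value=1.0, h=1e-4):
    """Toy physics-informed loss for u'(t) = -u(t) with u(0) = ic_value.

    `u` is a candidate solution (any callable on arrays of times),
    `ts` are interior collocation points. The derivative is approximated
    with central differences; a real PINN would use autodiff instead.
    """
    du = (u(ts + h) - u(ts - h)) / (2 * h)   # approximate u'(t)
    residual = du + u(ts)                    # u' + u should vanish for the true solution
    physics_loss = np.mean(residual ** 2)    # interior PDE-residual term
    ic_loss = (u(np.array([0.0]))[0] - ic_value) ** 2  # initial-condition term
    return physics_loss + ic_loss

ts = np.linspace(0.1, 2.0, 50)               # interior collocation points
exact = lambda t: np.exp(-t)                 # true solution of the ODE
wrong = lambda t: 1.0 - t                    # a poor candidate solution
loss_exact = pinn_style_loss(exact, ts)
loss_wrong = pinn_style_loss(wrong, ts)
```

A PINN trains a neural network to play the role of `u`, driving this combined loss toward zero; the exact solution scores near zero while a wrong candidate does not, even though both satisfy the initial condition.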