Neural Chains and Discrete Dynamical Systems

📅 2026-01-01
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the intrinsic connection between attention-free Transformers, referred to as neural chains, and discrete dynamical systems, focusing on the numerical solution of the Burgers and Eikonal equations. By interpreting neural chains as implicit representations of discretized differential equations, the work shows that while physics-informed neural networks (PINNs) can produce solutions comparable to those of traditional finite-difference (FD) methods, they do so through large numbers of unstructured random matrices rather than the structured difference operators of FD schemes. This leads to parameter redundancy, high training costs, and limited physical interpretability. Nevertheless, PINNs remain promising for high-dimensional problems. The findings offer a novel perspective on the dynamical learning mechanisms underlying PINNs and clarify the respective advantages and limitations of the two approaches.
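The analogy the summary describes can be made concrete with a minimal sketch (all dimensions, depths, and weight scales below are illustrative, not taken from the paper): an attention-free Transformer, a stack of residual MLP layers, applied depth-wise is formally one iteration per layer of a discrete dynamical system x_{k+1} = x_k + f_k(x_k).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # state dimension (hypothetical)
n_layers = 5   # depth of the "neural chain" (hypothetical)

# One attention-free residual MLP block per depth step.
weights = [(rng.normal(scale=0.1, size=(d, d)),
            rng.normal(scale=0.1, size=(d, d))) for _ in range(n_layers)]

def layer(x, W1, W2):
    # Residual update x_{k+1} = x_k + W2 @ tanh(W1 @ x_k):
    # structurally an explicit-Euler-like step of a discrete map.
    return x + W2 @ np.tanh(W1 @ x)

x = rng.normal(size=d)
for W1, W2 in weights:   # the forward pass iterates the map once per layer
    x = layer(x, W1, W2)
print(x.shape)
```

Reading the depth index as a discrete "time" index is exactly the correspondence between neural chains and discretized evolution equations that the paper exploits.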

📝 Abstract
We inspect the analogy between machine-learning (ML) applications based on the transformer architecture without self-attention, *neural chains* hereafter, and discrete dynamical systems associated with discretized versions of neural integral and partial differential equations (NIE, PDE). A comparative analysis of the numerical solution of the (viscid and inviscid) Burgers and Eikonal equations via standard numerical discretization (also cast in terms of neural chains) and via PINN learning is presented and commented on. It is found that standard numerical discretization and PINN learning provide two different paths to acquire essentially the same knowledge about the dynamics of the system. PINN learning proceeds through random matrices which bear no direct relation to the highly structured matrices associated with finite-difference (FD) procedures. Random matrices leading to acceptable solutions are far more numerous in matrix space than the unique tridiagonal form, which explains why the PINN search typically lands on the random ensemble. The price is a much larger number of parameters, causing a lack of physical transparency (explainability) as well as large training costs with no counterpart in the FD procedure. However, our results refer to one-dimensional dynamic problems, hence they do not rule out the possibility that PINNs, and ML in general, may offer better strategies for high-dimensional problems.
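For readers unfamiliar with the FD side of the comparison, the sketch below (grid size, viscosity, time step, and initial condition are all illustrative choices, not the paper's setup) advances the viscous Burgers equation u_t + u u_x = ν u_xx with explicit Euler steps, making visible the highly structured tridiagonal (plus periodic wrap) difference operator that the abstract contrasts with the PINN's random weight matrices.

```python
import numpy as np

# Illustrative grid on [0, 1) with periodic boundaries.
N, nu = 128, 0.05
dx = 1.0 / N
dt = 0.2 * dx**2 / nu            # conservative explicit-stability choice
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.sin(2 * np.pi * x)        # smooth initial condition

# Second-difference operator: tridiagonal up to the periodic corner entries.
D2 = (np.diag(-2.0 * np.ones(N))
      + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1))
D2[0, -1] = D2[-1, 0] = 1.0      # periodic wrap-around
D2 /= dx**2

for _ in range(200):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)  # central first difference
    u = u + dt * (-u * ux + nu * (D2 @ u))            # explicit Euler step

print(float(np.max(np.abs(u))))
```

Each time step is again a discrete dynamical map u^{n+1} = u^n + dt F(u^n), so the FD scheme itself can be read as a neural chain whose "weights" are the fixed, sparse, physically transparent matrix D2, the structural opposite of the dense random matrices a trained PINN settles on.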
Problem

Research questions and friction points this paper is trying to address.

neural chains
discrete dynamical systems
PINNs
numerical discretization
Burgers equation
Innovation

Methods, ideas, or system contributions that make the work stand out.

neural chains
discrete dynamical systems
physics-informed neural networks
numerical discretization
Burgers equation