🤖 AI Summary
This work addresses the learning and reduced-order modeling of invariant manifolds (IMs) in discrete dynamical systems. Methodologically, it proposes a physics-informed hybrid neural-analytic approach that couples shallow neural networks with orthogonal polynomial (Legendre/Chebyshev) power series. This integration combines the local analytic guarantees of polynomials with the global representational capacity of neural networks, guaranteeing exponential convergence near fixed points while overcoming the limited radius of convergence inherent to purely polynomial methods. It is presented as the first framework to unify analyticity constraints, numerical stability optimization, and physics-informed embedding within IM learning. Evaluated on three benchmark problems, the method achieves significantly higher approximation accuracy than pure polynomial or pure neural network baselines, keeps training cost manageable, and exhibits markedly improved convergence robustness.
📝 Abstract
We propose a hybrid machine learning scheme to learn -- in a physics-informed and numerical analysis-informed fashion -- invariant manifolds (IMs) of discrete maps for constructing reduced-order models (ROMs) of dynamical systems. The proposed scheme combines polynomial series with shallow neural networks, exploiting the complementary strengths of both approaches. Polynomials enable efficient and accurate modeling of ROMs with a guaranteed local exponential convergence rate around the fixed point, where, under certain assumptions, the IM is shown to be analytic. Neural networks provide approximations to more complex structures beyond the polynomials' radius of convergence. We evaluate the efficiency of the proposed scheme on three benchmark examples, examining convergence behavior, numerical approximation accuracy, and computational training cost. Additionally, we compare against IM approximations obtained solely with neural networks or with polynomial expansions. We demonstrate that the proposed hybrid scheme outperforms both pure polynomial approximations (power series, Legendre and Chebyshev polynomials) and standalone shallow neural network approximations in numerical approximation accuracy.
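The paper's scheme is not given in code here, but the core idea -- a low-degree polynomial capturing the local behavior near a fixed point, plus a shallow network correcting the residual where the polynomial alone falls short -- can be illustrated with a minimal, self-contained sketch. Everything below is a hypothetical toy: the target function `h_true`, the degree, and the network size are illustrative choices, and the network's output layer is solved by least squares over frozen random hidden weights (a stand-in for the gradient-based, physics-informed training used in the actual work).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "manifold" target: analytic near x = 0, but a low-degree
# polynomial alone approximates it poorly over a wide interval.
def h_true(x):
    return x**2 / (1.0 + x**2)

x = np.linspace(-2.0, 2.0, 400)
y = h_true(x)

# Step 1: polynomial part -- a low-degree Legendre fit,
# accurate close to the fixed point at the origin.
leg = np.polynomial.legendre.Legendre.fit(x, y, deg=4)
residual = y - leg(x)

# Step 2: shallow-network part, fitted to the residual.  Hidden
# weights are frozen at random values and only the linear output
# layer is solved, to keep the sketch dependency-free and fast.
n_hidden = 30
W = rng.normal(size=n_hidden)
b = rng.normal(size=n_hidden)
H = np.tanh(np.outer(x, W) + b)              # hidden activations
beta, *_ = np.linalg.lstsq(H, residual, rcond=None)

def h_hybrid(xq):
    """Hybrid approximation: polynomial plus network correction."""
    xq = np.atleast_1d(xq)
    return leg(xq) + np.tanh(np.outer(xq, W) + b) @ beta

err_poly = np.max(np.abs(y - leg(x)))
err_hybrid = np.max(np.abs(y - h_hybrid(x)))
print(f"polynomial-only max error: {err_poly:.2e}")
print(f"hybrid max error:          {err_hybrid:.2e}")
```

On this toy target the hybrid fit reduces the maximum error well below the polynomial-only fit, mirroring the qualitative claim of the abstract: the polynomial anchors the local approximation, while the network absorbs structure the truncated series cannot reach.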