🤖 AI Summary
Long-term prediction of complex partial differential equations (PDEs) using reduced-order models (ROMs) suffers from severe degradation in both accuracy and stability. Method: This paper proposes a latent-space dynamical modeling framework that integrates a high-order explicit finite-difference scheme with a multi-step rollout loss function. Specifically, we design a computationally efficient third-order finite-difference discretization to accurately approximate temporal evolution in the latent space, and introduce a rollout loss that explicitly minimizes cumulative prediction error over arbitrary time horizons. Coupled with PDE-parameterized latent-space dimensionality reduction, the model jointly learns a physically consistent low-dimensional manifold and a robust time-integration mechanism during training. Results: Experiments on the 2D Burgers equation demonstrate that our method reduces prediction error by over 40% for forecasts beyond 100 time steps, significantly outperforming state-of-the-art ROMs, while achieving high accuracy, strong generalizability, and low computational overhead.
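The rollout loss described above can be sketched in a framework-agnostic way: roll the learned one-step latent integrator forward autoregressively, feeding its own predictions back in, and accumulate the error against the ground-truth latent trajectory. The function below is a minimal NumPy illustration; the names `step`, `z0`, and `z_traj` are placeholders (the paper's actual training code is not shown here), and a real implementation would use a differentiable framework so gradients flow through the whole rollout.

```python
import numpy as np

def rollout_loss(step, z0, z_traj):
    """Cumulative multi-step prediction error over a latent trajectory.

    step:   one-step latent integrator, z_{t+1} = step(z_t)  (hypothetical model)
    z0:     initial latent state, shape (latent_dim,)
    z_traj: ground-truth latent states for t = 1..H, shape (H, latent_dim)
    """
    z, total = z0, 0.0
    for z_t in z_traj:            # autoregressive rollout over the horizon H
        z = step(z)               # feed the model's own prediction back in
        total += np.mean((z - z_t) ** 2)
    return total / len(z_traj)    # mean squared error per rollout step
```

Training on this quantity, rather than on single-step error alone, directly penalizes the error accumulation that degrades long-horizon forecasts.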
📄 Abstract
Solving complex partial differential equations is vital in the physical sciences, but often requires computationally expensive numerical methods. Reduced-order models (ROMs) address this by exploiting dimensionality reduction to create fast approximations. While modern ROMs can solve parameterized families of PDEs, their predictive power degrades over long time horizons. We address this by (1) introducing a flexible, high-order, yet inexpensive finite-difference scheme and (2) proposing a rollout loss that trains ROMs to make accurate predictions over arbitrary time horizons. We demonstrate our approach on the 2D Burgers equation.