🤖 AI Summary
Enforcing latent dynamics during training can degrade reconstruction accuracy in the LaSDI framework. To address this, the paper proposes multi-stage Latent Space Dynamics Identification (mLaSDI), which decouples latent-dynamics modeling from reconstruction optimization: (i) an autoencoder is first trained to obtain a high-fidelity low-dimensional representation; (ii) the encoder is then frozen while additional decoders are trained sequentially, each correcting the reconstruction residual left by the previous stage; and (iii) equation-discovery techniques are integrated to preserve physical interpretability. On the 1D-1V Vlasov equation, mLaSDI reduces prediction error by 37% on average relative to standard LaSDI, shortens training time by roughly 42%, and remains robust across diverse network architectures. The core innovation is the staged residual-correction mechanism, which improves both reconstruction fidelity and long-term prediction stability without sacrificing physical interpretability.
📝 Abstract
Accurate numerical solutions of partial differential equations are essential in many scientific fields but often require computationally expensive solvers, motivating reduced-order models (ROMs). Latent Space Dynamics Identification (LaSDI) is a data-driven ROM framework that combines autoencoders with equation discovery to learn interpretable latent dynamics. However, enforcing latent dynamics during training can compromise the model's reconstruction accuracy on simulation data. We introduce multi-stage LaSDI (mLaSDI), a framework that improves reconstruction and prediction accuracy by sequentially learning additional decoders to correct residual errors from previous stages. Applied to the 1D-1V Vlasov equation, mLaSDI consistently outperforms standard LaSDI, achieving lower prediction errors and reduced training time across a wide range of architectures.
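The staged residual-correction idea can be sketched independently of autoencoders: each stage fits a simple surrogate to the error left by the sum of all previous stages, and the final prediction is the sum of the stage outputs. The snippet below is a minimal illustration only, with polynomial least-squares fits standing in for the decoder networks; the toy signal, stage degrees, and function names are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200)
# Toy "snapshot" signal: a smooth mode plus a harder high-frequency mode.
target = np.sin(2 * np.pi * t) + 0.3 * np.sin(10 * np.pi * t)

def fit_stage(x, y, degree):
    """Fit a low-capacity surrogate (polynomial least squares) -- a stand-in
    for training one decoder stage on the current residual."""
    coeffs = np.polyfit(x, y, degree)
    return lambda x_new: np.polyval(coeffs, x_new)

stages = []
residual = target.copy()
for degree in (3, 6, 9):                  # three correction stages
    model = fit_stage(t, residual, degree)
    stages.append(model)
    residual = residual - model(t)        # next stage targets what remains

# Final prediction: sum of all stage outputs.
prediction = sum(model(t) for model in stages)
print("stage-1 error:", np.linalg.norm(target - stages[0](t)))
print("3-stage error:", np.linalg.norm(target - prediction))
```

In mLaSDI itself each stage would train a new decoder on the residual snapshots rather than a polynomial, but the structure is the same: later stages only see what earlier stages failed to reconstruct, so adding a stage can only refine the prediction.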