🤖 AI Summary
To enable large-scale, efficient modeling of nonlinear and unsteady partial differential equations (PDEs), this paper proposes transient-CoMLSim, a novel deep learning framework. Methodologically, it couples domain decomposition with latent-space autoregression: the spatial domain is partitioned into local subdomains, a CNN-based autoencoder learns low-dimensional representations of the solution and condition fields on each subdomain, and temporal evolution is then performed autoregressively entirely in the latent space, with training stabilized via curriculum learning. The domain-decomposition strategy lets the framework extrapolate to out-of-distribution computational domain sizes. Experiments across diverse unsteady PDE benchmarks demonstrate that transient-CoMLSim surpasses both Fourier Neural Operators (FNO) and U-Net in prediction accuracy, extrapolation to unseen timesteps, and rollout stability, while substantially reducing computational complexity, thereby addressing a scalability bottleneck of existing deep learning-based physics simulators.
📝 Abstract
In this paper, we propose a domain-decomposition-based deep learning (DL) framework, named transient-CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs). The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers. Unlike existing state-of-the-art methods that operate on the entire computational domain, our CNN-based autoencoder computes a lower-dimensional basis for solution and condition fields represented on subdomains. Timestepping is performed entirely in the latent space, generating embeddings of the solution variables from the time history of embeddings of solution and condition variables. This approach not only reduces computational complexity but also enhances scalability, making it well-suited for large-scale simulations. Furthermore, to improve the stability of our rollouts, we employ a curriculum learning (CL) approach during the training of the autoregressive model. The domain-decomposition strategy enables scaling to out-of-distribution domain sizes while maintaining the accuracy of predictions -- a feature not easily integrated into popular DL-based approaches for physics simulations. We benchmark our model against two widely used DL architectures, the Fourier Neural Operator (FNO) and U-Net, and demonstrate that our framework outperforms them in terms of accuracy, extrapolation to unseen timesteps, and stability for a wide range of use cases.
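The pipeline described above (decompose the domain into subdomains, encode each subdomain to a latent vector, timestep autoregressively in latent space, and grow the training rollout length via curriculum learning) can be sketched schematically. The following is a minimal NumPy illustration, not the paper's implementation: the linear projection `P` stands in for the CNN autoencoder, `latent_step` (a simple average of the latent history) stands in for the fully connected autoregressive model, and `curriculum_lengths` shows one plausible schedule for growing rollout length during training. All names, shapes, and the history window of two steps are hypothetical.

```python
import numpy as np

SUB = 4      # subdomain edge length (hypothetical)
LATENT = 8   # latent dimension per subdomain (hypothetical)

def decompose(field, sub):
    """Split a 2D field (H, W) into non-overlapping (sub, sub) subdomains."""
    H, W = field.shape
    return (field.reshape(H // sub, sub, W // sub, sub)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, sub, sub))

def recompose(patches, H, W):
    """Reassemble subdomain patches back into the full (H, W) field."""
    sub = patches.shape[-1]
    return (patches.reshape(H // sub, W // sub, sub, sub)
                   .transpose(0, 2, 1, 3)
                   .reshape(H, W))

# Stand-in for the trained CNN autoencoder: a fixed linear projection.
rng = np.random.default_rng(0)
P = rng.standard_normal((SUB * SUB, LATENT)) / SUB

def encode(patches):
    """Map (n, sub, sub) subdomain fields to (n, LATENT) embeddings."""
    return patches.reshape(len(patches), -1) @ P

def decode(z):
    """Approximate inverse of encode via the pseudo-inverse of P."""
    return (z @ np.linalg.pinv(P)).reshape(-1, SUB, SUB)

def latent_step(z_hist):
    """Stand-in autoregressive model: predict the next embedding
    from the time history of embeddings (here, their mean)."""
    return np.mean(z_hist, axis=0)

def rollout(z_init, n_steps):
    """Autoregressive rollout entirely in latent space: each new
    embedding is fed back as input for the next prediction."""
    hist = list(z_init)
    for _ in range(n_steps):
        hist.append(latent_step(np.stack(hist[-2:])))
    return hist

def curriculum_lengths(n_epochs, max_len):
    """Curriculum learning over rollout length: start by supervising
    single-step predictions, gradually extend to max_len-step rollouts."""
    return [min(max_len, 1 + e * max_len // n_epochs) for e in range(n_epochs)]
```

In training, each epoch would unroll the latent model for `curriculum_lengths(...)[epoch]` steps and backpropagate through the rollout, which is the mechanism the abstract credits with stabilizing long-horizon predictions.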