🤖 AI Summary
This work addresses the challenge of projection-based model order reduction (PMOR) for convection-dominated problems on unstructured meshes. Methodologically, it introduces a geometric deep LSPG (GD-LSPG) framework: a hierarchical graph autoencoder integrates graph coarsening and message-passing to yield geometry-aware embeddings of unstructured meshes; subsequently, nonlinear manifold-constrained least-squares Petrov–Galerkin (LSPG) projection is performed in an ultra-low-dimensional latent space (e.g., k = 5) to ensure robust reduced-order modeling. The key contribution lies in overcoming CNNs’ reliance on structured grids—this is the first integration of graph neural networks with nonlinear LSPG, explicitly encoding both geometric structure and convection physics. Validation on the 1D Burgers equation (structured grid) and 2D Euler equations (unstructured grid) demonstrates over 40% error reduction versus conventional affine projections, superior generalizability, and markedly improved accuracy and stability of low-dimensional representations.
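To make the latent-space projection step concrete, the sketch below shows the general idea of a nonlinear-manifold LSPG solve: given some decoder g mapping latent coordinates to the full state, minimize the full-order residual norm over the latent coordinates via Gauss–Newton. This is a minimal toy, not the paper's implementation: the `decoder`, the linear `residual`, and the finite-difference Jacobian are all stand-in assumptions for a trained graph autoencoder and a real discretized PDE residual.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 20, 2  # full-order dimension and (very low) latent dimension

# Hypothetical nonlinear decoder standing in for a trained graph autoencoder.
V = rng.standard_normal((N, k))
def decoder(s_hat):
    z = V @ s_hat
    return z + 0.1 * z**2  # mild nonlinearity -> a nonlinear trial manifold

# Toy full-order residual r(x) = A x - b; a real solver would supply this.
A = rng.standard_normal((N, N)) + 5.0 * np.eye(N)
s_true = np.array([0.7, -0.3])
b = A @ decoder(s_true)  # manufactured so an exact latent solution exists
def residual(x):
    return A @ x - b

def lspg_step(s_hat, eps=1e-6):
    """One Gauss-Newton step minimizing ||r(decoder(s_hat))||_2."""
    F = residual(decoder(s_hat))
    # Finite-difference Jacobian of the composed map s_hat -> r(decoder(s_hat)).
    J = np.column_stack([
        (residual(decoder(s_hat + eps * e)) - F) / eps
        for e in np.eye(k)
    ])
    ds, *_ = np.linalg.lstsq(J, -F, rcond=None)  # least-squares (Petrov-Galerkin) update
    return s_hat + ds

s_hat = np.zeros(k)
for _ in range(20):
    s_hat = lspg_step(s_hat)
print(np.linalg.norm(residual(decoder(s_hat))))  # residual norm on the manifold
```

The point of the scheme is that the optimization unknowns live in R^k (here k = 2), while the residual being minimized is the full-order one, so accuracy is tied to the expressiveness of the decoder rather than to an affine subspace.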
📝 Abstract
This paper presents the development of a graph autoencoder architecture capable of performing projection-based model-order reduction (PMOR) using a nonlinear manifold least-squares Petrov-Galerkin projection scheme. The architecture is particularly useful for advection-dominated flows, as it captures the underlying geometry of the modeled domain to provide a robust nonlinear mapping that can be leveraged in a PMOR setting. The presented graph autoencoder is constructed with a two-part process that consists of (1) generating a hierarchy of reduced graphs to emulate the compressive abilities of convolutional neural networks (CNNs) and (2) training a message passing operation at each step in the hierarchy of reduced graphs to emulate the filtering process of a CNN. The resulting framework provides improved flexibility over traditional CNN-based autoencoders because it is extendable to unstructured meshes. To highlight the capabilities of the proposed framework, which is named geometric deep least-squares Petrov-Galerkin (GD-LSPG), we benchmark the method on a one-dimensional Burgers' model with a structured mesh and demonstrate the flexibility of GD-LSPG by deploying it on two test cases for two-dimensional Euler equations that use an unstructured mesh. The proposed framework is more flexible than a traditional CNN-based autoencoder and provides considerable improvement in accuracy for very low-dimensional latent spaces in comparison with traditional affine projections.
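The two-part construction described above (a hierarchy of reduced graphs plus a message-passing filter at each level) can be sketched in a few lines. The example below is a simplified illustration, not the paper's architecture: the path-graph "mesh", the fixed `clusters` partition, and the weight `W1` are all hypothetical stand-ins for a learned coarsening hierarchy and trained message-passing weights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D "mesh": 8 nodes on a path graph, one scalar field value per node.
n = 8
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

def message_pass(x, adj, W):
    """One message-passing layer: mean-aggregate over neighbors (incl. self),
    then apply a linear transform and nonlinearity -- the graph analogue of a
    CNN filter. W stands in for trained weights."""
    A_hat = adj + np.eye(len(adj))
    deg = A_hat.sum(axis=1, keepdims=True)
    return np.tanh((A_hat / deg) @ x @ W)

def coarsen(x, clusters):
    """Pool node features onto a reduced graph by averaging within clusters,
    emulating the strided pooling of a CNN on an unstructured mesh."""
    return np.stack([x[c].mean(axis=0) for c in clusters])

x = rng.standard_normal((n, 1))    # nodal field (feature dimension 1)
W1 = rng.standard_normal((1, 4))   # stand-in for trained layer weights
h = message_pass(x, adj, W1)       # filter on the fine graph
clusters = [[0, 1], [2, 3], [4, 5], [6, 7]]
h_coarse = coarsen(h, clusters)    # level-1 reduced graph: 4 nodes
print(h.shape, h_coarse.shape)     # (8, 4) (4, 4)
```

Repeating this filter-then-coarsen pattern over successively smaller reduced graphs, followed by a final dense map to a handful of latent coordinates, gives the encoder; the decoder mirrors the process in reverse. Because aggregation uses only the adjacency structure, nothing here assumes a structured grid.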