Projection-based model-order reduction via graph autoencoders suited for unstructured meshes

📅 2024-07-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of projection-based model-order reduction (PMOR) for convection-dominated problems on unstructured meshes. Methodologically, it introduces a geometric deep LSPG (GD-LSPG) framework: a hierarchical graph autoencoder combines graph coarsening with message passing to yield geometry-aware embeddings of unstructured meshes; a nonlinear manifold least-squares Petrov–Galerkin (LSPG) projection is then performed in an ultra-low-dimensional latent space (e.g., k = 5) to ensure robust reduced-order modeling. The key contribution lies in overcoming CNNs' reliance on structured grids: this is the first integration of graph neural networks with nonlinear LSPG, explicitly encoding both geometric structure and convection physics. Validation on the 1D Burgers' equation (structured mesh) and the 2D Euler equations (unstructured mesh) demonstrates over 40% error reduction versus conventional affine projections, superior generalizability, and markedly improved accuracy and stability of low-dimensional representations.
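The latent-space projection described above can be sketched as follows. This is a toy NumPy illustration, not the paper's solver: the residual operator `A u - b`, the decoder `tanh(W z)`, and all weights are hypothetical placeholders standing in for the trained graph autoencoder. The idea it demonstrates is the nonlinear-manifold LSPG step: minimize the full-order residual norm over the latent coordinates `z` with damped Gauss–Newton iterations.

```python
import numpy as np

# Hypothetical toy setup (NOT the paper's model): full-order residual
# r(u) = A u - b, nonlinear decoder g(z) = tanh(W z), random weights.
rng = np.random.default_rng(0)
n, k = 40, 5                                 # full-order dim, latent dim (k = 5 as in the summary)
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
W = rng.standard_normal((n, k))

def decode(z):                               # nonlinear manifold mapping g(z)
    return np.tanh(W @ z)

def residual(u):                             # full-order residual r(u)
    return A @ u - b

def decoder_jacobian(z):                     # dg/dz = diag(1 - tanh^2(Wz)) @ W
    return (1.0 - np.tanh(W @ z) ** 2)[:, None] * W

def lspg_solve(z, iters=20):
    """Damped Gauss-Newton minimization of ||r(g(z))||_2 over the latent coords."""
    for _ in range(iters):
        r = residual(decode(z))
        J = A @ decoder_jacobian(z)          # residual Jacobian w.r.t. z (chain rule)
        dz, *_ = np.linalg.lstsq(J, -r, rcond=None)
        step = 1.0
        while step > 1e-8:                   # backtrack until the residual decreases
            z_new = z + step * dz
            if np.linalg.norm(residual(decode(z_new))) < np.linalg.norm(r):
                z = z_new
                break
            step *= 0.5
        else:
            break                            # no descent found; stop early
    return z

z0 = np.zeros(k)
z_star = lspg_solve(z0)
print(np.linalg.norm(residual(decode(z0))),
      np.linalg.norm(residual(decode(z_star))))
```

In the actual framework the decoder is the trained graph autoencoder's decoder and the residual comes from the discretized PDE; the structure of the iteration (residual Jacobian times decoder Jacobian, least-squares update in the latent space) is the part this sketch is meant to convey.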

📝 Abstract
This paper presents the development of a graph autoencoder architecture capable of performing projection-based model-order reduction (PMOR) using a nonlinear manifold least-squares Petrov-Galerkin projection scheme. The architecture is particularly useful for advection-dominated flows, as it captures the underlying geometry of the modeled domain to provide a robust nonlinear mapping that can be leveraged in a PMOR setting. The presented graph autoencoder is constructed with a two-part process that consists of (1) generating a hierarchy of reduced graphs to emulate the compressive abilities of convolutional neural networks (CNNs) and (2) training a message passing operation at each step in the hierarchy of reduced graphs to emulate the filtering process of a CNN. The resulting framework provides improved flexibility over traditional CNN-based autoencoders because it is extendable to unstructured meshes. To highlight the capabilities of the proposed framework, which is named geometric deep least-squares Petrov-Galerkin (GD-LSPG), we benchmark the method on a one-dimensional Burgers' model with a structured mesh and demonstrate the flexibility of GD-LSPG by deploying it on two test cases for two-dimensional Euler equations that use an unstructured mesh. The proposed framework is more flexible than using a traditional CNN-based autoencoder and provides considerable improvement in accuracy for very low-dimensional latent spaces in comparison with traditional affine projections.
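The two-part encoder process described in the abstract can be sketched in a few lines of NumPy. Everything here is an illustrative placeholder, not the paper's trained model: the small graph, the cluster assignment matrix `P`, and the weight matrix `W1` are invented for the example. Step (1) is emulated by one precomputed coarsening level (pooling nodes into clusters, the analogue of CNN compression), and step (2) by one mean-aggregation message-passing layer (the analogue of a CNN filter).

```python
import numpy as np

# Minimal sketch of the two-part encoder idea; graph, clusters, and weights
# are hypothetical placeholders, not parameters from the paper.
rng = np.random.default_rng(1)
num_nodes, feat = 8, 3

# Adjacency (with self-loops) of a small unstructured "mesh" graph
A = np.zeros((num_nodes, num_nodes))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (0, 7), (2, 6)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A += np.eye(num_nodes)
D_inv = np.diag(1.0 / A.sum(axis=1))         # degree normalization

def message_pass(X, W):
    """One mean-aggregation message-passing layer (the CNN-filter analogue)."""
    return np.tanh(D_inv @ A @ X @ W)

# Coarsening: assign the 8 fine nodes to 4 clusters (the CNN-pooling analogue)
P = np.zeros((4, num_nodes))
for cluster, nodes in enumerate([(0, 1), (2, 3), (4, 5), (6, 7)]):
    P[cluster, list(nodes)] = 0.5            # average the two member nodes

W1 = rng.standard_normal((feat, feat))       # placeholder "learned" weights
X = rng.standard_normal((num_nodes, feat))   # node features (e.g., flow state)

H = message_pass(X, W1)                      # (2) filter on the fine graph
Hc = P @ H                                   # (1) pool to the reduced graph
z = Hc.reshape(-1)                           # flatten toward a latent code
print(Hc.shape, z.shape)
```

A real encoder would stack several such coarsening levels, each with its own trained message-passing weights, until the flattened features reach the target latent dimension; the decoder mirrors the hierarchy in reverse.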
Problem

Research questions and friction points this paper is trying to address.

Develops a graph autoencoder for projection-based model-order reduction
Handles advection-dominated flows on unstructured meshes
Improves accuracy in very low-dimensional latent spaces over affine projections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph autoencoder enables nonlinear-manifold PMOR
Hierarchy of reduced graphs emulates CNN compression
Message passing emulates CNN filtering
Liam K. Magargal
Department of Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, PA, United States
Parisa Khodabakhshi
Department of Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, PA, United States
Steven N. Rodriguez
Computational Multiphysics Systems Laboratory, U.S. Naval Research Laboratory, Washington, DC, United States
Justin W. Jaworski
Kevin T. Crofton Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA, United States
J. Michopoulos
Computational Multiphysics Systems Laboratory, U.S. Naval Research Laboratory, Washington, DC, United States