Geometry Aware Operator Transformer as an Efficient and Accurate Neural Surrogate for PDEs on Arbitrary Domains

๐Ÿ“… 2025-05-24
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the low accuracy, poor generalization, and inefficiency of existing methods for learning solution operators of partial differential equations (PDEs) on arbitrary geometric domains, this paper proposes a geometry-aware multiscale graph attention neural operator framework. The method encodes domain geometry into learnable embeddings and integrates graph neural operator encoders and decoders with a vision-transformer-based processor in an end-to-end encoder–decoder architecture, enabling strong cross-geometry and cross-resolution generalization while preserving computational efficiency. By combining multiscale attention mechanisms with geometric prior modeling, it substantially enhances the expressivity and robustness of operator learning. Evaluated on diverse PDE benchmarks and a large-scale 3D industrial computational fluid dynamics (CFD) dataset, the approach consistently outperforms state-of-the-art baselines, achieves substantial speedups in both training and inference, and demonstrates excellent scalability.

๐Ÿ“ Abstract
The very challenging task of learning solution operators of PDEs on arbitrary domains accurately and efficiently is of vital importance to engineering and industrial simulations. Despite the existence of many operator learning algorithms to approximate such PDEs, we find that accurate models are not necessarily computationally efficient and vice versa. We address this issue by proposing a geometry aware operator transformer (GAOT) for learning PDEs on arbitrary domains. GAOT combines novel multiscale attentional graph neural operator encoders and decoders, together with geometry embeddings and (vision) transformer processors to accurately map information about the domain and the inputs into a robust approximation of the PDE solution. Multiple innovations in the implementation of GAOT also ensure computational efficiency and scalability. We demonstrate this significant gain in both accuracy and efficiency of GAOT over several baselines on a large number of learning tasks from a diverse set of PDEs, including achieving state of the art performance on a large scale three-dimensional industrial CFD dataset.
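The abstract's encoder–processor–decoder pipeline (geometry embeddings, attention-based transfer from an arbitrary point cloud to latent tokens, a transformer processor, and attention-based decoding back to the domain) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the k-nearest-neighbour distance feature standing in for GAOT's learnable geometry embeddings, the single-head attention, and the random weights are all illustrative assumptions; it only shows how such a surrogate maps a field sampled on an unstructured point cloud to an output field of the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    """Single-head scaled dot-product attention: (nq,d),(nk,d),(nk,dv) -> (nq,dv)."""
    s = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(s) @ v

def geometry_embedding(points, k=3):
    """Toy geometry feature: sorted distances to the k nearest neighbours
    (a hand-crafted stand-in for GAOT's learnable geometry embeddings)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, 1:k + 1]  # drop the zero self-distance

def gaot_like_surrogate(points, u, n_tokens=4, width=16):
    """Encode (nodes -> latent tokens), process (token self-attention),
    decode (tokens -> query points); all weights random, for shape/flow only."""
    feats = np.concatenate([u, geometry_embedding(points)], axis=-1)
    W_in = rng.normal(size=(feats.shape[-1], width)) / np.sqrt(feats.shape[-1])
    h = feats @ W_in                               # lift node features
    tokens_q = rng.normal(size=(n_tokens, width))  # "learned" latent queries
    z = attend(tokens_q, h, h)                     # encoder: pool nodes to tokens
    z = z + attend(z, z, z)                        # processor: self-attention block
    pos_q = points @ rng.normal(size=(points.shape[-1], width))
    out = attend(pos_q, z, z)                      # decoder: read out at the points
    W_out = rng.normal(size=(width, u.shape[-1])) / np.sqrt(width)
    return out @ W_out

points = rng.uniform(size=(32, 2))  # arbitrary 2D point cloud, no mesh required
u = np.sin(points[:, :1] * 6.0)     # scalar input field sampled at the points
pred = gaot_like_surrogate(points, u)
print(pred.shape)  # (32, 1): one output value per input point
```

Because the encoder and decoder attend over point coordinates rather than a fixed grid, the same pipeline accepts any point count or domain shape, which is the mechanism behind the cross-geometry and cross-resolution generalization claimed above.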
Problem

Research questions and friction points this paper is trying to address.

Learning PDE solution operators on arbitrary domains accurately and efficiently
Balancing accuracy and computational efficiency in neural PDE surrogates
Handling diverse PDEs with geometry-aware transformer architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometry aware operator transformer for PDEs
Multiscale attentional graph neural operators
Geometry embeddings with transformer processors
๐Ÿ”Ž Similar Papers
No similar papers found.
Shizheng Wen
School of Physics and Electronic Electrical Engineering, Huaiyin Normal University
Electron Transport · NEGF · Density Functional Theory · DFT
Arsh Kumbhat
Seminar for Applied Mathematics, ETH Zurich, Switzerland
Levi Lingsch
Seminar for Applied Mathematics, ETH Zurich, Switzerland; ETH AI Center, Zurich, Switzerland
Sepehr Mousavi
Seminar for Applied Mathematics, ETH Zurich, Switzerland; Department of Mechanical and Process Engineering, ETH Zurich, Switzerland
Yizhou Zhao
School of Computer Science, CMU, USA
Praveen Chandrashekar
Center for Applicable Mathematics, Tata Institute of Fundamental Research
CFD · FEM · Optimization
Siddhartha Mishra
Professor of Applied Mathematics, ETH Zurich, Switzerland
Applied Mathematics · Numerical Analysis · Scientific Computing · Computational Fluid and Plasma Dynamics · Applied PDEs