🤖 AI Summary
Existing operator learning methods (e.g., DeepONet, FNO) for solving partial differential equations (PDEs) on irregular geometric domains are constrained by their reliance on regular grids and poor performance under sparse, non-uniform sampling. To address this, we propose a mesh-free operator learning framework based on graph neural networks. Our approach constructs adaptive point-cloud graphs to faithfully represent irregular domains and integrates graph attention mechanisms with a learnable complex-coefficient Fourier-domain encoder to enhance global spatial dependency modeling—particularly under limited data. Evaluated on diverse 2D PDE benchmarks—including Darcy flow, advection, Eikonal, and nonlinear diffusion—the method consistently outperforms DeepONet and FNO under sparse sampling regimes, achieving superior prediction accuracy and improved generalization robustness across unseen domain geometries and boundary conditions.
📝 Abstract
Operator learning seeks to approximate mappings from input functions to output solutions, particularly in the context of partial differential equations (PDEs). While recent advances such as DeepONet and the Fourier Neural Operator (FNO) have demonstrated strong performance, they often rely on regular grid discretizations, limiting their applicability to complex or irregular domains. In this work, we propose a Graph-based Operator Learning with Attention (GOLA) framework that addresses this limitation by constructing graphs from irregularly sampled spatial points and leveraging attention-enhanced Graph Neural Networks (GNNs) to model spatial dependencies with global information. To improve expressive capacity, we introduce a Fourier-based encoder that projects input functions into frequency space using learnable complex coefficients, allowing for flexible embeddings even with sparse or nonuniform samples. We evaluate our approach on a range of 2D PDEs, including Darcy Flow, Advection, Eikonal, and Nonlinear Diffusion, under varying sampling densities. Our method consistently outperforms baselines, particularly in data-scarce regimes, demonstrating strong generalization and efficiency on irregular domains.
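The Fourier-based encoder described above can be illustrated with a minimal sketch: because the spatial samples are scattered rather than on a regular grid, the frequency-space projection is a nonuniform discrete Fourier transform, re-weighted by learnable complex coefficients. The sketch below (in numpy, with assumed names like `n_modes` and random initialization standing in for trained weights) is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 64 irregularly sampled points in [0, 1]^2 with
# function values u(x); `n_modes` frequencies per axis is an assumption.
n_points, n_modes = 64, 8
x = rng.uniform(0.0, 1.0, size=(n_points, 2))
u = np.sin(2 * np.pi * x[:, 0]) * np.cos(2 * np.pi * x[:, 1])

# Integer frequency grid k = (k1, k2) with k1, k2 in [0, n_modes).
k = np.stack(
    np.meshgrid(np.arange(n_modes), np.arange(n_modes), indexing="ij"),
    axis=-1,
).reshape(-1, 2)

# Learnable complex coefficients per mode (randomly initialized here;
# in training these would be optimized alongside the GNN).
w = rng.normal(size=k.shape[0]) + 1j * rng.normal(size=k.shape[0])

# Nonuniform DFT of the scattered samples: no FFT or regular grid needed.
phase = np.exp(-2j * np.pi * (x @ k.T))   # shape (n_points, n_modes**2)
coeffs = w * (phase.T @ u) / n_points     # shape (n_modes**2,)

# Real-valued frequency embedding passed to the downstream graph network.
embedding = np.concatenate([coeffs.real, coeffs.imag])
print(embedding.shape)  # (128,)
```

Because the transform is evaluated pointwise at the sample locations, the same encoder applies unchanged to any point cloud, which is what makes the embedding usable under sparse or nonuniform sampling.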