🤖 AI Summary
Existing Transformer-based neural operators for solving partial differential equations (PDEs) suffer from high computational complexity (O(N²)) and imprecise geometric modeling, and consequently deliver sub-optimal performance even on regular grids. To address these limitations, this work introduces Mamba—a state-space model (SSM)-based architecture—into neural operators, yielding the Geometric Mamba Neural Operator (GeoMaNO), a novel framework that ensures both geometric consistency and linear-time complexity. Specifically, it models long-range dependencies via structured SSMs, incorporates coordinate-aware geometric embeddings to explicitly encode differential-geometric priors, and adopts a PDE-aware, discretization-invariant parameterization strategy. Evaluated on canonical benchmarks—including Darcy flow and Navier–Stokes equations—the method reduces solution-operator approximation error by up to 58.9% compared to current state-of-the-art approaches, establishing an efficient, geometry-respecting paradigm for PDE learning.
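To make the linear-complexity claim concrete, here is a minimal, hypothetical sketch of a structured-SSM scan of the kind Mamba-style architectures build on: the state is updated once per grid point in a single O(N) pass, instead of the O(N²) pairwise interactions of attention. The diagonal parameters `A`, `B`, `C` and the function `ssm_scan` are illustrative toy choices, not GeoMaNO's actual implementation.

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Toy linear-time state-space scan (per-channel diagonal SSM).

    Recurrence: h_t = A * h_{t-1} + B * u_t,  y_t = C * h_t.
    One sequential pass over N grid points => O(N) time, versus the
    O(N^2) cost of full pairwise attention in Transformer-based NOs.
    """
    N, d = u.shape
    h = np.zeros(d)          # hidden state, one scalar per channel
    y = np.empty_like(u)
    for t in range(N):
        h = A * h + B * u[t]  # state update, O(d) work per step
        y[t] = C * h          # linear readout
    return y

# Usage: 1024 grid points, 8 channels, processed in a single pass.
rng = np.random.default_rng(0)
u = rng.standard_normal((1024, 8))
y = ssm_scan(u, A=np.full(8, 0.9), B=np.ones(8), C=np.ones(8))
print(y.shape)  # (1024, 8)
```

In practice Mamba makes `A`, `B`, `C` input-dependent (the "selective" mechanism) and computes the scan with a hardware-efficient parallel algorithm, but the asymptotic cost remains linear in sequence length.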
📝 Abstract
The neural operator (NO) framework has emerged as a powerful tool for solving partial differential equations (PDEs). Recent NOs are dominated by the Transformer architecture, which equips them to capture long-range dependencies in PDE dynamics. However, existing Transformer-based NOs incur quadratic complexity, lack geometric rigor, and consequently exhibit sub-optimal performance on regular grids. As a remedy, we propose the Geometric Mamba Neural Operator (GeoMaNO) framework, which endows NOs with Mamba's modeling capability and linear complexity, together with geometric rigor. We evaluate GeoMaNO on multiple standard, widely used PDE benchmarks, spanning Darcy flow and Navier–Stokes problems. GeoMaNO improves on existing baselines in solution-operator approximation by as much as 58.9%.