Latent Mamba Operator for Partial Differential Equations

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing neural operators suffer from poor scalability, high computational cost, and difficulty in modeling continuous dynamics and long-range dependencies when solving high-dimensional partial differential equations (PDEs). To address these challenges, we propose a tightly integrated framework combining latent-space state space models (SSMs) with neural operator kernel integrals. We establish, for the first time, a theoretical connection between SSMs and kernel integral operators. Furthermore, we design a multi-scale hybrid architecture that jointly leverages structured grids and unstructured point clouds to handle complex geometries. Our method achieves state-of-the-art performance across diverse PDE benchmarks in solid and fluid mechanics, improving operator approximation accuracy by 32.3% over existing neural operators. It exhibits linear computational complexity, strong generalization across domains, and inherent physical consistency. This work introduces a novel paradigm for efficient, scalable, and physically grounded modeling of high-dimensional PDEs.

📝 Abstract
Neural operators have emerged as powerful data-driven frameworks for solving Partial Differential Equations (PDEs), offering significant speedups over numerical methods. However, existing neural operators struggle with scalability in high-dimensional spaces, incur high computational costs, and face challenges in capturing continuous and long-range dependencies in PDE dynamics. To address these limitations, we introduce the Latent Mamba Operator (LaMO), which integrates the efficiency of state-space models (SSMs) in latent space with the expressive power of kernel integral formulations in neural operators. We also establish a theoretical connection between SSMs and the kernel integral of neural operators. In extensive experiments across diverse PDE benchmarks on regular grids, structured meshes, and point clouds, covering solid and fluid physics datasets, LaMO achieves consistent state-of-the-art (SOTA) performance, with a 32.3% improvement over existing baselines in solution operator approximation, highlighting its efficacy in modeling complex PDE solutions.
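The claimed connection between SSMs and kernel integral operators can be illustrated with a toy diagonal state-space model: unrolling the linear recurrence gives exactly a discrete kernel sum over past inputs. The NumPy sketch below assumes a zero-order-hold-style discretization and hypothetical shapes; it is an illustration of the general equivalence, not the authors' LaMO implementation.

```python
import numpy as np

def latent_ssm_scan(u, a, b, c, dt=0.1):
    """Sequential (recurrent) view of a diagonal linear SSM over latent tokens.

    u : (L, d) latent token sequence
    a : (n,)   diagonal of the continuous-time state matrix (negative for stability)
    b : (n, d) input projection
    c : (d, n) output projection
    """
    abar = np.exp(dt * a)                  # discretized state matrix (diagonal)
    x = np.zeros_like(a)
    ys = []
    for u_k in u:
        x = abar * x + dt * (b @ u_k)      # state update: x_k = Abar x_{k-1} + Bbar u_k
        ys.append(c @ x)                   # readout:      y_k = C x_k
    return np.stack(ys)

def kernel_form(u, a, b, c, dt=0.1):
    """Equivalent kernel-integral view: y_k = sum_{j<=k} C Abar^{k-j} Bbar u_j."""
    abar = np.exp(dt * a)
    ys = []
    for k in range(len(u)):
        x = sum(abar ** (k - j) * (dt * (b @ u[j])) for j in range(k + 1))
        ys.append(c @ x)
    return np.stack(ys)
```

Both views produce identical outputs; the recurrent form costs O(L) per sequence, which is the source of the linear complexity highlighted in the abstract, while the kernel-sum form makes the correspondence with a neural operator's kernel integral explicit.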
Problem

Research questions and friction points this paper is trying to address.

Neural operators struggle with high-dimensional scalability
Existing methods have high computational costs
Challenges in capturing continuous long-range PDE dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates state-space models in latent space
Combines kernel integral with neural operators
Achieves SOTA performance in PDE benchmarks
Karn Tiwari
Indian Institute of Science
Machine Learning
Niladri Dutta
Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India
N. M. A. Krishnan
Yardi School of AI, Indian Institute of Technology, New Delhi, India; Department of Civil Engineering, Indian Institute of Technology, New Delhi, India
Prathosh A. P.
Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, India