🤖 AI Summary
Existing data-driven PDE identification methods are constrained to a fixed coordinate system and number of spatial dimensions, which limits their generalizability. To address this, we propose the first coordinate-agnostic and dimension-agnostic framework for learning the evolution of scalar fields. Our approach integrates exterior differential forms into PDE learning, enabling coordinate-free and dimension-free spatial modeling. We design an exterior-calculus-based coordinate-invariant representation, a geometry-aware neural network architecture, and a numerical training paradigm informed by reaction–diffusion and chemotaxis models. Extensive evaluation on the FitzHugh–Nagumo, Barkley, and Patlak–Keller–Segel systems demonstrates that models trained in a single spatial setting make accurate predictions across varying dimensions, coordinate systems, manifold curvatures, and boundary conditions, addressing the generalization bottleneck in data-driven PDE modeling.
📝 Abstract
Machine learning methods for data-driven identification of partial differential equations (PDEs) are typically tied to a given number of spatial dimensions and to the coordinate system in which the data were collected. This dependence prevents the learned evolution equation from generalizing to other spaces. In this work, we reformulate the problem in terms of coordinate- and dimension-independent representations, paving the way toward what we call "spatially liberated" PDE learning. To this end, we employ a machine learning approach to predict the evolution of scalar field systems expressed in the formalism of exterior calculus, which is coordinate-free and generalizes to arbitrary dimensions by construction. We demonstrate the performance of this approach on the FitzHugh–Nagumo and Barkley reaction–diffusion models, as well as the Patlak–Keller–Segel model informed by in-situ observations of chemotactic bacteria. We provide extensive numerical experiments demonstrating that our approach allows for seamless transitions across spatial contexts: the field dynamics learned in one space can be used to make accurate predictions in spaces with different dimensions, coordinate systems, boundary conditions, and curvatures.
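To illustrate why the exterior-calculus formalism is coordinate- and dimension-free, consider a minimal sketch (not taken from the paper itself) of a reaction–diffusion equation rewritten with the exterior derivative $\mathrm{d}$, the Hodge star $\star$, and a generic reaction term $f(u, v)$; the sign convention for the codifferential $\delta$ varies across references:

```latex
% A scalar field u is a 0-form. The Laplace-de Rham operator on 0-forms
% reduces to \Delta u = \delta\,\mathrm{d}u, and with the convention
% \delta = -\star\mathrm{d}\star on 1-forms, the usual Laplacian is
% \nabla^2 u = -\delta\,\mathrm{d}u = \star\,\mathrm{d}\star\mathrm{d}u.
% A reaction-diffusion equation then takes the coordinate-free form
\partial_t u \;=\; D\,\star\mathrm{d}\star\mathrm{d}u \;+\; f(u, v),
% which is well defined on any Riemannian manifold, in any dimension,
% without reference to a particular coordinate chart.
```

Every operator in this expression is defined intrinsically on the manifold, so the same learned right-hand side can be evaluated in different dimensions, coordinate systems, and curvatures.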