🤖 AI Summary
Existing Fourier Neural Operator (FNO)-based generative models are constrained to regular grids and rectangular domains, which limits their applicability to complex geometries and unstructured meshes. To address this, the paper proposes the Mesh-Informed Neural Operator (MINO), a domain- and discretization-agnostic framework for generative modeling in function spaces. MINO combines graph neural operators with Transformer-style cross-attention, yielding geometry-aware, mesh-invariant representation learning, and offers a template for co-designing neural operators with modern deep generative architectures. The paper also introduces a standardized suite of evaluation metrics for functional generative tasks. Experiments show substantial improvements in generalization and accuracy across generation, inverse-modeling, and regression tasks, including robust functional modeling over complex geometries and unstructured discretizations.
📝 Abstract
Generative models in function spaces, situated at the intersection of generative modeling and operator learning, are attracting increasing attention due to their immense potential in diverse scientific and engineering applications. While functional generative models are theoretically domain- and discretization-agnostic, current implementations heavily rely on the Fourier Neural Operator (FNO), limiting their applicability to regular grids and rectangular domains. To overcome these critical limitations, we introduce the Mesh-Informed Neural Operator (MINO). By leveraging graph neural operators and cross-attention mechanisms, MINO offers a principled, domain- and discretization-agnostic backbone for generative modeling in function spaces. This advancement significantly expands the scope of such models to more diverse applications in generative, inverse, and regression tasks. Furthermore, MINO provides a unified perspective on integrating neural operators with modern deep learning architectures. Finally, we introduce a suite of standardized evaluation metrics that enable objective comparison of functional generative models, addressing another critical gap in the field.
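To illustrate why cross-attention yields discretization-agnostic representations, the toy sketch below (not the paper's actual MINO architecture; all weights and shapes are illustrative assumptions) lets a fixed set of query points attend over mesh node features. Because the softmax aggregation marginalizes over the mesh axis, the output shape and meaning are independent of how many mesh points discretize the input function:

```python
import numpy as np

def cross_attention(q, k, v):
    """Scaled dot-product attention: queries attend over mesh nodes."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (Nq, Nm)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # softmax over mesh nodes
    return w @ v                                  # (Nq, d): mesh-size independent

rng = np.random.default_rng(0)
d = 16
# Illustrative (untrained) projection weights for 2-D coords and scalar values.
Wq, Wk = rng.normal(size=(2, d)), rng.normal(size=(2, d))
Wv = rng.normal(size=(1, d))

def encode(mesh_coords, mesh_values, query_coords):
    """Encode a discretized function onto fixed query points."""
    return cross_attention(query_coords @ Wq, mesh_coords @ Wk, mesh_values @ Wv)

# The same underlying function sampled on two different discretizations.
queries = rng.uniform(size=(8, 2))
coarse = rng.uniform(size=(50, 2))
fine = rng.uniform(size=(500, 2))
f = lambda x: np.sin(x.sum(axis=1, keepdims=True))

out_coarse = encode(coarse, f(coarse), queries)
out_fine = encode(fine, f(fine), queries)
assert out_coarse.shape == out_fine.shape == (8, d)
```

Both discretizations produce latent representations of identical shape at the same query points, which is the property that lets such a backbone operate on arbitrary unstructured meshes.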