Kernel Neural Operators (KNOs) for Scalable, Memory-efficient, Geometrically-flexible Operator Learning

📅 2024-06-30
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
Existing operator learning methods for irregular geometric domains suffer from high memory consumption and poor geometric adaptability. Method: This paper proposes the Kernel Neural Operator (KNO), which places learnable, compactly supported kernel functions with trainable sparsity parameters inside deep integral operators to jointly ensure smoothness and computational efficiency; it further uses numerical quadrature for geometry-agnostic modeling, eliminating the reliance on structured grids. Contribution/Results: On standard benchmarks, the KNO achieves higher training and test accuracy than popular neural operators while using at least an order of magnitude fewer trainable parameters, significantly improving memory efficiency, generalization, and geometric robustness.
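The building block described above can be sketched in a few lines. The snippet below is an illustrative assumption of how a quadrature-discretized kernel integral layer with a trainable, compactly supported kernel might look; the class name, the Wendland-type kernel, and the single length-scale parameter are not taken from the paper.

```python
import torch
import torch.nn as nn

class KernelIntegralLayer(nn.Module):
    """Minimal sketch (an assumption, not the authors' implementation) of a
    kernel-based integral operator layer: v(x_i) = sum_j w_j k(x_i, y_j) u(y_j),
    with a compactly supported kernel whose support radius is a trainable
    (sparsity-controlling) parameter."""

    def __init__(self):
        super().__init__()
        # Trainable inverse length scale; name and initialization are illustrative choices.
        self.log_eps = nn.Parameter(torch.zeros(1))

    def kernel(self, x, y):
        # Wendland-type compactly supported, finitely smooth kernel
        # (one common closed-form choice; the paper's exact family may differ).
        r = torch.cdist(x, y) * torch.exp(self.log_eps)
        return torch.clamp(1.0 - r, min=0.0) ** 4 * (4.0 * r + 1.0)

    def forward(self, u, x_out, y_quad, w_quad):
        # u:      (n_quad, c)  input function values at the quadrature nodes
        # x_out:  (n_out, d)   output evaluation points
        # y_quad: (n_quad, d)  quadrature nodes (no structured grid required)
        # w_quad: (n_quad,)    quadrature weights
        K = self.kernel(x_out, y_quad)          # (n_out, n_quad) kernel matrix
        return K @ (w_quad.unsqueeze(-1) * u)   # quadrature-discretized integral
```

With compact support, the kernel matrix becomes increasingly sparse as the trainable support radius shrinks, which is one way such sparsity parameters can translate into computational savings.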

📝 Abstract
This paper introduces the Kernel Neural Operator (KNO), a novel operator learning technique that uses deep kernel-based integral operators in conjunction with quadrature for function-space approximation of operators (maps from functions to functions). KNOs use parameterized, closed-form, finitely-smooth, and compactly-supported kernels with trainable sparsity parameters within the integral operators to significantly reduce the number of parameters that must be learned relative to existing neural operators. Moreover, the use of quadrature for numerical integration endows the KNO with geometric flexibility that enables operator learning on irregular geometries. Numerical results demonstrate that on existing benchmarks the training and test accuracy of KNOs is higher than popular operator learning techniques while using at least an order of magnitude fewer trainable parameters. KNOs thus represent a new paradigm of low-memory, geometrically-flexible, deep operator learning, while retaining the implementation simplicity and transparency of traditional kernel methods from both scientific computing and machine learning.
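The geometric flexibility claimed in the abstract comes from the quadrature step: once the integral is replaced by a weighted sum over scattered nodes, no structured grid is needed. The self-contained sketch below illustrates this with Monte Carlo quadrature on a unit disk; the node sampling, the fixed Wendland-type kernel, and the test function are illustrative assumptions, not the paper's setup.

```python
import torch

def wendland(x, y, eps=4.0):
    # Compactly supported, finitely smooth kernel (one common closed-form choice).
    r = torch.cdist(x, y) * eps
    return torch.clamp(1.0 - r, min=0.0) ** 4 * (4.0 * r + 1.0)

torch.manual_seed(0)

# Monte Carlo quadrature over the unit disk: rejection-sample scattered nodes,
# each with equal weight |Omega| / n, where |Omega| = pi is the disk's area.
pts = torch.rand(4000, 2) * 2.0 - 1.0
y = pts[(pts ** 2).sum(dim=1) < 1.0]                  # quadrature nodes inside the disk
w = torch.full((y.shape[0],), torch.pi / y.shape[0])  # equal Monte Carlo weights

u = torch.sin(y[:, :1]) * torch.cos(y[:, 1:])         # an input function sampled at the nodes
x = y[:100]                                           # output points (any scattered set works)

v = wendland(x, y) @ (w.unsqueeze(-1) * u)            # quadrature-discretized integral operator
print(v.shape)                                        # -> torch.Size([100, 1])
```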
Problem

Research questions and friction points this paper is trying to address.

Developing scalable operator learning with memory-efficient neural architectures
Enabling geometrically-flexible operator approximation on irregular domains
Overcoming the curse of dimensionality through expressive trainable kernel designs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses deep kernel-based integral operators for function-space operator approximation
Decouples the kernel choice from the numerical integration (quadrature) scheme
Leverages dimension-wise kernel factorization for efficiency (see the sketch below)
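The dimension-wise factorization item above is not spelled out in this summary; one plausible reading, sketched below purely as an assumption, is that the d-dimensional kernel is built as a product of 1-D kernels, so the number of shape parameters grows linearly rather than exponentially in the dimension.

```python
import torch

def factorized_kernel(x, y, eps):
    """Sketch of a dimension-wise factorized kernel (an assumed reading of the
    factorization mentioned above): the d-dimensional kernel is a product of
    1-D compactly supported kernels, one per coordinate, each with its own
    trainable shape parameter eps[d]."""
    # x: (n, d), y: (m, d), eps: (d,)
    r = torch.abs(x.unsqueeze(1) - y.unsqueeze(0)) * eps        # (n, m, d) scaled 1-D distances
    phi = torch.clamp(1.0 - r, min=0.0) ** 4 * (4.0 * r + 1.0)  # 1-D Wendland-type factors
    return phi.prod(dim=-1)                                      # (n, m) product over coordinates
```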
👥 Authors
Matthew Lowery (Kahlert School of Computing, University of Utah, UT, USA)
John Turnage (Department of Mathematics, University of Utah, UT, USA)
Zachary Morrow (Scientific Machine Learning, Sandia National Laboratories)
J. Jakeman (Optimization and Uncertainty Quantification, Sandia National Laboratories)
Akil Narayan (University of Utah) · Scientific computing, numerical analysis, uncertainty quantification
Shandian Zhe (School of Computing, University of Utah) · Probabilistic Machine Learning
Varun Shankar (Kahlert School of Computing, University of Utah) · Scientific Machine Learning, Scientific Computing