Redefining Neural Operators in $d+1$ Dimensions

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of existing neural operators in capturing the spatiotemporal evolution mechanisms of dynamical systems. We propose the Schrödingerised Kernel Neural Operator (SKNO), the first neural operator explicitly defined over a $(d+1)$-dimensional spacetime domain—extending beyond the conventional $d$-dimensional spatial domain—to model linear functional evolution processes. Inspired by quantum-mechanical PDE simulation, SKNO integrates a $(d+1)$-dimensional kernel integral operator, a Schrödinger-style evolution block, and lifting/recovering operators into a unified architecture. On multiple PDE benchmarks, SKNO achieves state-of-the-art accuracy. Notably, it also performs zero-shot super-resolution, suggesting that high-dimensional spacetime modeling enhances representational capacity for dynamical systems.
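To make the core idea concrete, here is a minimal toy sketch of a kernel integral operator evaluated over a flattened $(d+1)$-dimensional spacetime grid, i.e. the kernel takes $(x, t)$ coordinate pairs rather than spatial points alone. The Gaussian kernel, grid sizes, and channel count are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def kernel_integral(v, coords, kernel):
    """(K v)(p) = sum_q kernel(p, q) v(q) dq over a flattened spacetime grid.

    v:      (N, c) array of input features at N spacetime points
    coords: (N, d+1) array of (x, t) coordinates for each point
    kernel: callable mapping two (d+1,) points to a (c, c) matrix
    """
    N = coords.shape[0]
    dq = 1.0 / N  # uniform quadrature weight on the toy grid
    out = np.zeros_like(v)
    for i in range(N):
        for j in range(N):
            out[i] += kernel(coords[i], coords[j]) @ v[j] * dq
    return out

# Example: 1D space + time, so d+1 = 2; a Gaussian kernel acting per channel.
c = 2
xs = np.linspace(0, 1, 8)
ts = np.linspace(0, 1, 4)
coords = np.array([(x, t) for t in ts for x in xs])   # (32, 2) spacetime points
v = np.random.default_rng(0).normal(size=(coords.shape[0], c))

def gauss_kernel(p, q):
    # Hypothetical kernel choice for illustration only.
    return np.exp(-np.sum((p - q) ** 2)) * np.eye(c)

u = kernel_integral(v, coords, gauss_kernel)
print(u.shape)  # (32, 2)
```

The only change relative to a conventional neural-operator kernel integral is that the integration domain includes the time axis, which is the $(d+1)$-dimensional redefinition the summary describes.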

📝 Abstract
Neural Operators have emerged as powerful tools for learning mappings between function spaces. Among them, the kernel integral operator has been widely validated as a universal approximator of various operators. Although recent work following this definition has developed effective modules to better approximate the kernel function defined on the original domain (with $d$ dimensions, $d = 1, 2, 3, \ldots$), the unclarified evolution mechanism in the embedding spaces obscures how to design neural operators that fully capture the target system's evolution. Drawing on recent breakthroughs in quantum simulation of partial differential equations (PDEs), we elucidate the linear evolution process in neural operators. Building on this, we redefine neural operators on a new $(d+1)$-dimensional domain. Within this framework, we implement our proposed Schrödingerised Kernel Neural Operator (SKNO), which aligns better with the $(d+1)$-dimensional evolution. In experiments, our $(d+1)$-dimensional evolving linear block performs far better than alternatives. We also demonstrate SKNO's state-of-the-art performance on various benchmarks and on the zero-shot super-resolution task. In addition, we analyse the impact of different lifting and recovering operators on the prediction within the redefined NO framework, reflecting the alignment between our model and the underlying $(d+1)$-dimensional evolution.
Problem

Research questions and friction points this paper is trying to address.

Clarify evolution mechanism in neural operator embedding spaces
Redefine neural operators in d+1 dimensions for better evolution capture
Improve performance on operator approximation and super-resolution tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Redefines neural operators in d+1 dimensions
Implements the Schrödingerised Kernel Neural Operator
Enhances performance with linear evolution blocks
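The "Schrödingerised" evolution idea above can be sketched as a norm-preserving linear update: one step $u_{t+\Delta t} = e^{-iH\Delta t}\, u_t$ with a Hermitian generator $H$, so the propagator is unitary, as in quantum simulation of PDEs. The generator, step size, and state dimension below are illustrative assumptions, not the paper's trained components.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2            # Hermitian generator (toy stand-in)
dt = 0.1

# Propagator exp(-i H dt) via eigendecomposition of the Hermitian H:
# scale each eigenvector by its phase, then recombine.
w, V = np.linalg.eigh(H)
U = (V * np.exp(-1j * w * dt)) @ V.conj().T   # unitary evolution operator

u0 = rng.normal(size=n) + 1j * rng.normal(size=n)
u1 = U @ u0

# Unitarity means the evolution preserves the state's norm.
print(np.isclose(np.linalg.norm(u0), np.linalg.norm(u1)))  # True
```

Norm preservation is the property that distinguishes a Schrödinger-style evolution block from a generic learned linear layer, which has no such constraint.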