Kernel-based Stochastic Approximation Framework for Nonlinear Operator Learning

📅 2025-09-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the curse of dimensionality and model misspecification in learning nonlinear operators between infinite-dimensional spaces. We propose a stochastic approximation framework based on Mercer operator-valued kernels. Methodologically, the approach unifies the treatment of general kernel structures—including compact and diagonal kernels—by integrating vector-valued reproducing kernel Hilbert spaces (RKHS), spectral decomposition, and interpolation space theory to construct a vector-valued interpolation space with quantifiable model error. Theoretically, we establish, for the first time, dimension-independent polynomial convergence rates, overcoming the linear-type limitation inherent to the scalar-kernel construction (K = kI) and providing rigorous theoretical guarantees for genuinely nonlinear operator learning. Numerical experiments on the two-dimensional Navier–Stokes equations demonstrate high-accuracy modeling and confirm the framework’s effectiveness in mitigating the curse of dimensionality.

📝 Abstract
We develop a stochastic approximation framework for learning nonlinear operators between infinite-dimensional spaces utilizing general Mercer operator-valued kernels. Our framework encompasses two key classes: (i) compact kernels, which admit discrete spectral decompositions, and (ii) diagonal kernels of the form $K(x,x')=k(x,x')T$, where $k$ is a scalar-valued kernel and $T$ is a positive operator on the output space. This broad setting induces expressive vector-valued reproducing kernel Hilbert spaces (RKHSs) that generalize the classical $K=kI$ paradigm, thereby enabling rich structural modeling with rigorous theoretical guarantees. To address target operators lying outside the RKHS, we introduce vector-valued interpolation spaces to precisely quantify misspecification error. Within this framework, we establish dimension-free polynomial convergence rates, demonstrating that nonlinear operator learning can overcome the curse of dimensionality. The use of general operator-valued kernels further allows us to derive rates for intrinsically nonlinear operator learning, going beyond the linear-type behavior inherent in diagonal constructions of $K=kI$. Importantly, this framework accommodates a wide range of operator learning tasks, ranging from integral operators such as Fredholm operators to architectures based on encoder-decoder representations. Moreover, we validate its effectiveness through numerical experiments on the two-dimensional Navier-Stokes equations.
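As a toy illustration of the diagonal kernel class $K(x,x')=k(x,x')T$ described above (not code from the paper), one can build a finite-dimensional surrogate in which the scalar kernel $k$ is Gaussian and the positive operator $T$ on the output space is replaced by a positive semi-definite matrix. The function names, the Gaussian choice of $k$, and the dimensions are illustrative assumptions:

```python
import numpy as np

def scalar_kernel(x, xp, gamma=1.0):
    # Gaussian scalar kernel k(x, x') on discretized inputs
    # (illustrative choice; the paper allows general Mercer kernels)
    return np.exp(-gamma * np.sum((x - xp) ** 2))

def diagonal_operator_kernel(x, xp, T, gamma=1.0):
    # Diagonal operator-valued kernel K(x, x') = k(x, x') T,
    # with the PSD matrix T standing in for the positive
    # operator on the (discretized) output space.
    return scalar_kernel(x, xp, gamma) * T

# toy example: 3-dimensional surrogate output space
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T = A @ A.T                      # PSD by construction
x, xp = rng.standard_normal(5), rng.standard_normal(5)
K = diagonal_operator_kernel(x, xp, T)
```

Here each kernel evaluation returns an operator (a matrix) rather than a scalar, which is what distinguishes vector-valued RKHSs from the classical $K = kI$ setting: $T$ can encode correlations across output components.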
Problem

Research questions and friction points this paper is trying to address.

Learning nonlinear operators between infinite-dimensional spaces
Overcoming curse of dimensionality in operator learning
Accommodating various operator learning tasks beyond linear-type behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic approximation with Mercer operator-valued kernels
Vector-valued RKHS for nonlinear operator learning
Dimension-free convergence overcoming curse of dimensionality
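The stochastic approximation idea listed above can be sketched as an online functional gradient step in the vector-valued RKHS: after each sample the estimator gains one kernel section with a coefficient proportional to the residual. This is a minimal sketch under assumed choices (Gaussian scalar kernel, constant step size, squared loss); it is not the paper's exact algorithm:

```python
import numpy as np

def k(x, xp, gamma=1.0):
    # assumed Gaussian scalar kernel
    return np.exp(-gamma * np.sum((x - xp) ** 2))

def fit_online(X, Y, T, eta=0.5, gamma=1.0):
    # Online kernel gradient descent for K(x, x') = k(x, x') T.
    # After step t the estimator has the representer form
    #   f_t(x) = sum_s k(x_s, x) T c_s,
    # and sample (x_t, y_t) appends the coefficient
    #   c_t = -eta * (f_t(x_t) - y_t).
    centers, coeffs = [], []
    for x, y in zip(X, Y):
        pred = sum(k(c, x, gamma) * (T @ a)
                   for c, a in zip(centers, coeffs)) \
               if centers else np.zeros_like(y)
        centers.append(x)
        coeffs.append(-eta * (pred - y))

    def f(x):
        return sum(k(c, x, gamma) * (T @ a)
                   for c, a in zip(centers, coeffs))
    return f
```

With `eta=1` and `T` the identity, a single step exactly fits its sample, since `k(x, x) = 1`; the dimension-free rates in the paper concern how such iterates converge when the target operator lies in (or near) the induced interpolation space.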
Jia-Qi Yang
ByteDance
machine learning · data mining · recommender systems
Lei Shi
School of Mathematical Sciences and Shanghai Key Laboratory for Contemporary Applied Mathematics, Fudan University, Shanghai 200433, China.