🤖 AI Summary
This work addresses the curse of dimensionality and model misspecification in learning nonlinear operators between infinite-dimensional spaces. We propose a stochastic approximation framework based on Mercer operator-valued kernels. Methodologically, the approach unifies the treatment of general kernel structures, including compact and diagonal kernels, by combining vector-valued reproducing kernel Hilbert spaces (RKHSs), spectral decomposition, and interpolation space theory to construct a vector-valued interpolation space with quantifiable model error. Theoretically, we establish, for the first time, dimension-independent polynomial convergence rates, moving beyond the linear-type behavior inherent in the scalar-valued kernel $K = kI$ and providing rigorous theoretical guarantees for genuinely nonlinear operator learning. Numerical experiments on the two-dimensional Navier–Stokes equations demonstrate high-accuracy modeling and confirm the framework’s effectiveness in mitigating the curse of dimensionality.
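For quick reference, the display below sketches the two kernel classes and the online update they induce in the vector-valued RKHS. The notation ($\mu_i$, $\phi_i$, $\gamma_t$, $f_t$) is generic shorthand introduced here for illustration; the precise assumptions, eigensystem, and step-size schedule are those of the paper, not this sketch.

$$
K(x,x') = \sum_{i \ge 1} \mu_i\, \phi_i(x) \otimes \phi_i(x') \ \ \text{(compact)}, \qquad K(x,x') = k(x,x')\,T \ \ \text{(diagonal)},
$$

$$
f_{t+1} = f_t - \gamma_t\, K(\cdot\,, x_t)\bigl(f_t(x_t) - y_t\bigr) \quad \text{(schematic stochastic approximation step on samples $(x_t, y_t)$)}.
$$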
📝 Abstract
We develop a stochastic approximation framework for learning nonlinear operators between infinite-dimensional spaces using general Mercer operator-valued kernels. Our framework encompasses two key classes: (i) compact kernels, which admit discrete spectral decompositions, and (ii) diagonal kernels of the form $K(x,x')=k(x,x')T$, where $k$ is a scalar-valued kernel and $T$ is a positive operator on the output space. This broad setting induces expressive vector-valued reproducing kernel Hilbert spaces (RKHSs) that generalize the classical $K=kI$ paradigm, thereby enabling rich structural modeling with rigorous theoretical guarantees. To address target operators lying outside the RKHS, we introduce vector-valued interpolation spaces to precisely quantify misspecification error. Within this framework, we establish dimension-free polynomial convergence rates, demonstrating that nonlinear operator learning can overcome the curse of dimensionality. The use of general operator-valued kernels further allows us to derive rates for intrinsically nonlinear operator learning, going beyond the linear-type behavior inherent in diagonal constructions with $K=kI$. Importantly, the framework accommodates a wide range of operator learning tasks, from integral operators such as Fredholm operators to architectures based on encoder-decoder representations. Moreover, we validate its effectiveness through numerical experiments on the two-dimensional Navier–Stokes equations.
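To make the learning procedure concrete, here is a minimal, self-contained sketch assuming a Gaussian scalar kernel $k$, a diagonal kernel $K(x,x') = k(x,x')\,T$ with a fixed positive semi-definite matrix $T$ acting on a discretized output space, and a polynomially decaying step size; these specific choices (and the helper names `scalar_kernel`, `fit_online`) are illustrative assumptions, not the paper's prescribed algorithm. The iterate is kept in dual form, so only centers and coefficients are stored.

```python
import numpy as np

def scalar_kernel(x, xp, sigma=1.0):
    """Scalar Gaussian kernel k(x, x') on discretized inputs (an assumed choice)."""
    return np.exp(-np.sum((x - xp) ** 2) / (2.0 * sigma ** 2))

def fit_online(samples, T, gamma0=0.5, decay=0.5):
    """Online stochastic approximation with the diagonal kernel K(x, x') = k(x, x') T.

    Schematic update: f_{t+1} = f_t - gamma_t * K(., x_t)(f_t(x_t) - y_t),
    stored in dual form f_t = sum_i k(., x_i) T c_i.
    """
    centers, coeffs = [], []
    for t, (x_t, y_t) in enumerate(samples, start=1):
        # Evaluate the current iterate at the new input x_t.
        pred = np.zeros_like(y_t)
        for x_i, c_i in zip(centers, coeffs):
            pred += scalar_kernel(x_t, x_i) * (T @ c_i)
        # Polynomially decaying step size (an assumed schedule).
        gamma_t = gamma0 * t ** (-decay)
        # Gradient step adds one new center with coefficient -gamma_t * residual.
        centers.append(x_t)
        coeffs.append(-gamma_t * (pred - y_t))

    def f(x):
        """Evaluate the learned operator at a (discretized) input x."""
        out = np.zeros(T.shape[0])
        for x_i, c_i in zip(centers, coeffs):
            out += scalar_kernel(x, x_i) * (T @ c_i)
        return out

    return f

# Toy usage: inputs and outputs are functions sampled on a grid of size d,
# with synthetic random data standing in for actual PDE solution pairs.
d = 32
rng = np.random.default_rng(0)
T = np.eye(d)  # T = I recovers the classical K = kI setting
samples = [(rng.standard_normal(d), rng.standard_normal(d)) for _ in range(200)]
f_hat = fit_online(samples, T)
print(f_hat(rng.standard_normal(d)).shape)  # -> (32,)
```

Choosing a non-identity $T$ (for example, a smoothing operator on the output grid) is what distinguishes the diagonal construction from the classical $K = kI$ case in this sketch.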