🤖 AI Summary
Problem: Existing studies focus primarily on scalar-valued neural networks and their connections to reproducing kernel Banach spaces (RKBSs), leaving the function-space structure of vector-valued networks (e.g., ℝᵈ-valued networks, DeepONets, hypernetworks) and neural operators largely uncharacterized within an RKBS framework.
Method: We propose a general framework of vector-valued reproducing kernel Banach spaces (vv-RKBSs) that inherently includes an associated reproducing kernel and recovers familiar vv-RKHS properties, without requiring symmetric kernel domains, finite-dimensional output spaces, reflexivity, or separability. Leveraging integral-type constructions and representer theorems, this framework unifies diverse non-scalar architectures.
Contribution/Results: We prove that shallow ℝᵈ-valued networks, DeepONets, and hypernetworks all reside in specific instances of the integral and neural vv-RKBS, and we derive representer theorems for the associated optimization problems, thereby providing a solid functional-analytic foundation for neural operator learning.
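As a concrete illustration (standard notation, not taken from the paper; the input dimension $d_0$ is a symbol chosen here for the sketch), a shallow $\mathbb{R}^d$-valued network has the form

$$
f(x) \;=\; \sum_{k=1}^{n} a_k\, \sigma\!\big(\langle w_k, x\rangle + b_k\big), \qquad a_k \in \mathbb{R}^d,\; w_k \in \mathbb{R}^{d_0},\; b_k \in \mathbb{R},
$$

and a representer theorem for a regularized learning problem over the corresponding vv-RKBS asserts that some minimizer can be taken of exactly this finite-width form; the precise hypotheses and the resulting width are those established in the paper.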
📝 Abstract
Recently, there has been growing interest in characterizing the function spaces underlying neural networks. While shallow and deep scalar-valued neural networks have been linked to scalar-valued reproducing kernel Banach spaces (RKBS), $\mathbb{R}^d$-valued neural networks and neural operator models remain less understood in the RKBS setting. To address this gap, we develop a general definition of vector-valued RKBS (vv-RKBS), which inherently includes the associated reproducing kernel. Our construction extends existing definitions by avoiding restrictive assumptions such as symmetric kernel domains, finite-dimensional output spaces, reflexivity, or separability, while still recovering familiar properties of vector-valued reproducing kernel Hilbert spaces (vv-RKHS). We then show that shallow $\mathbb{R}^d$-valued neural networks are elements of a specific vv-RKBS, namely an instance of the integral and neural vv-RKBS. To also explore the functional structure of neural operators, we analyze the DeepONet and Hypernetwork architectures and demonstrate that they too belong to an integral and neural vv-RKBS. In all cases, we establish a Representer Theorem, showing that optimization over these function spaces recovers the corresponding neural architectures.
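For orientation, the vv-RKHS property that such constructions generalize is the standard operator-valued reproducing identity (a well-known fact; the notation $\mathcal{H}$, $\mathcal{X}$, $\mathcal{Y}$, $K$ is chosen here for illustration and is not necessarily the paper's):

$$
\langle f,\, K(\cdot, x)\, c \rangle_{\mathcal{H}} \;=\; \langle f(x),\, c \rangle_{\mathcal{Y}} \qquad \text{for all } f \in \mathcal{H},\; x \in \mathcal{X},\; c \in \mathcal{Y},
$$

where $K(x, x') \in \mathcal{L}(\mathcal{Y})$ is an operator-valued kernel and $\mathcal{Y}$ is a Hilbert output space. In the Banach-space setting, the inner products become duality pairings, the output space need not be finite-dimensional or Hilbert, and the two kernel arguments may range over different sets, matching the relaxed assumptions listed above.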