🤖 AI Summary
This work addresses the limitation of classical reproducing kernel Hilbert spaces (RKHS) in modeling learning architectures with non-Hilbertian geometric structures, such as fixed-architecture neural networks equipped with non-quadratic norms, by developing a functional-analytic framework for featured reproducing kernel Banach spaces (RKBS). By identifying suitable structural conditions, the authors recover the key components of kernel-based learning, including feature maps, kernel constructions, and representer-type results, and formulate supervised learning as either a minimal-norm interpolation problem or a regularized optimization problem. The study establishes a theoretically grounded RKBS framework for non-Hilbertian Banach spaces that supports both feature representations and kernel-based learning, and it demonstrates that fixed-architecture neural networks naturally induce such spaces, unifying kernel methods and neural networks under a common function-space perspective and extending the reach of kernel learning principles.
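As background, the classical Hilbertian setting that the paper moves beyond can be sketched as follows; the notation (the space $\mathcal{H}$, kernel $K$, feature map $\Phi$, loss $L$, and regularization weight $\lambda$) is generic textbook notation and is not taken from the paper.

```latex
% Classical RKHS background (generic notation, not the paper's own).
% A reproducing kernel Hilbert space H of functions on a set X with kernel K
% satisfies the reproducing property and admits a feature map Phi : X -> H:
\[
  f(x) = \langle f,\, K(x,\cdot)\rangle_{\mathcal{H}},
  \qquad
  K(x,y) = \langle \Phi(x),\, \Phi(y)\rangle_{\mathcal{H}}.
\]
% Classical representer theorem: the regularized problem over H has a minimizer
% lying in the span of the kernel sections at the data points x_1, ..., x_m:
\[
  \min_{f\in\mathcal{H}} \sum_{i=1}^{m} L\bigl(f(x_i), y_i\bigr) + \lambda\,\|f\|_{\mathcal{H}}^{2}
  \quad\Longrightarrow\quad
  f^{\star} = \sum_{i=1}^{m} c_i\, K(x_i,\cdot), \quad c_i \in \mathbb{R}.
\]
```

It is this finite-dimensional reduction, automatic in the Hilbertian case, that the paper investigates under weaker, non-Hilbertian geometric assumptions.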
📝 Abstract
Reproducing kernel Hilbert spaces provide a foundational framework for kernel-based learning, where regularization and interpolation problems admit finite-dimensional solutions through classical representer theorems. Many modern learning models, however -- including fixed-architecture neural networks equipped with non-quadratic norms -- naturally give rise to non-Hilbertian geometries that fall outside this setting. In Banach spaces, continuity of point-evaluation functionals alone is insufficient to guarantee feature representations or kernel-based learning formulations. In this work, we develop a functional-analytic framework for learning in Banach spaces based on the notion of featured reproducing kernel Banach spaces. We identify the precise structural conditions under which feature maps, kernel constructions, and representer-type results can be recovered beyond the Hilbertian regime. Within this framework, supervised learning is formulated as a minimal-norm interpolation or regularization problem, and existence results together with conditional representer theorems are established. We further extend the theory to vector-valued featured reproducing kernel Banach spaces and show that fixed-architecture neural networks naturally induce special instances of such spaces. This provides a unified function-space perspective on kernel methods and neural networks and clarifies when kernel-based learning principles extend beyond reproducing kernel Hilbert spaces.
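To make the two formulations mentioned in the abstract concrete, a schematic version in a Banach space $\mathcal{B}$ of functions with continuous point evaluations is sketched below; the symbols ($\mathcal{B}$, $L$, $\lambda$, $\phi$) are generic and may differ from the paper's notation.

```latex
% Minimal-norm interpolation: among all functions in B that fit the data exactly,
% select one of smallest Banach norm (the norm is generally non-quadratic).
\[
  \min_{f\in\mathcal{B}} \;\|f\|_{\mathcal{B}}
  \quad\text{subject to}\quad
  f(x_i) = y_i, \quad i = 1,\dots,m.
\]
% Regularized learning: trade off data fit against the Banach norm via a loss L
% and an increasing regularizer phi.
\[
  \min_{f\in\mathcal{B}} \;\sum_{i=1}^{m} L\bigl(f(x_i), y_i\bigr)
  \;+\; \lambda\,\phi\bigl(\|f\|_{\mathcal{B}}\bigr).
\]
```

The paper's existence results and conditional representer theorems describe when solutions of these problems exist and when they admit finite, kernel-based representations analogous to the Hilbertian case.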