Featured Reproducing Kernel Banach Spaces for Learning and Neural Networks

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitation of classical reproducing kernel Hilbert spaces (RKHS) in modeling learning architectures with non-Hilbertian geometric structures—such as fixed-architecture neural networks equipped with non-quadratic norms—by developing a functional-analytic framework for reproducing kernel Banach spaces (RKBS) with feature maps. Through the introduction of structural conditions, the authors recover key components including feature mappings, kernel construction, and a representer theorem, thereby formulating supervised learning as either a minimum-norm interpolation or a regularized optimization problem. This study establishes, for the first time, a theoretically sound RKBS framework in non-Hilbertian Banach spaces that supports both feature representation and kernel-based learning. It demonstrates that fixed-architecture neural networks naturally induce such spaces, unifying kernel methods and neural networks under a common function-space perspective and significantly extending the applicability of kernel learning principles.

📝 Abstract
Reproducing kernel Hilbert spaces provide a foundational framework for kernel-based learning, where regularization and interpolation problems admit finite-dimensional solutions through classical representer theorems. Many modern learning models, however, including fixed-architecture neural networks equipped with non-quadratic norms, naturally give rise to non-Hilbertian geometries that fall outside this setting. In Banach spaces, continuity of point-evaluation functionals alone is insufficient to guarantee feature representations or kernel-based learning formulations. In this work, we develop a functional-analytic framework for learning in Banach spaces based on the notion of featured reproducing kernel Banach spaces. We identify the precise structural conditions under which feature maps, kernel constructions, and representer-type results can be recovered beyond the Hilbertian regime. Within this framework, supervised learning is formulated as a minimum-norm interpolation or regularization problem, and existence results together with conditional representer theorems are established. We further extend the theory to vector-valued featured reproducing kernel Banach spaces and show that fixed-architecture neural networks naturally induce special instances of such spaces. This provides a unified function-space perspective on kernel methods and neural networks and clarifies when kernel-based learning principles extend beyond reproducing kernel Hilbert spaces.
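To fix ideas, the feature-map and learning formulations mentioned in the abstract can be sketched in the standard RKBS notation; the symbols below (the feature space $\mathcal{W}$, the maps $\Phi$, $\Phi'$, and the exponent $p$) are illustrative conventions from the RKBS literature and may differ from the paper's exact definitions.

```latex
% Featured RKBS sketch (standard setup; notation is illustrative,
% not necessarily the paper's). A feature map \Phi : X \to \mathcal{W}
% into a Banach feature space, paired with \Phi' : X \to \mathcal{W}',
% induces a kernel and a pointwise representation of functions:
K(x, y) = \langle \Phi(x), \Phi'(y) \rangle_{\mathcal{W} \times \mathcal{W}'},
\qquad
f(x) = \langle v, \Phi'(x) \rangle \quad \text{for some } v \in \mathcal{W}.

% Supervised learning as minimum-norm interpolation over the RKBS \mathcal{B}:
\min_{f \in \mathcal{B}} \; \|f\|_{\mathcal{B}}
\quad \text{subject to} \quad f(x_i) = y_i, \; i = 1, \dots, m;

% or, alternatively, as a regularized empirical-risk problem:
\min_{f \in \mathcal{B}} \; \sum_{i=1}^{m} L\bigl(f(x_i), y_i\bigr)
  + \lambda \, \|f\|_{\mathcal{B}}^{p}.
```

When $\mathcal{B}$ is a Hilbert space and $p = 2$, this reduces to the classical RKHS setting, where the representer theorem gives solutions as finite kernel expansions; the paper's contribution concerns the structural conditions under which analogous results survive in the non-Hilbertian case.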
Problem

Research questions and friction points this paper is trying to address.

reproducing kernel Banach spaces
non-Hilbertian learning
feature maps
neural networks
representer theorems
Innovation

Methods, ideas, or system contributions that make the work stand out.

featured reproducing kernel Banach spaces
non-Hilbertian learning
representer theorems
neural networks in Banach spaces
kernel methods beyond RKHS
Isabel de la Higuera
Dept. of Computer Science and Artificial Intelligence, Andalusian Institute on Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, 18140, Spain
Francisco Herrera
Professor of Computer Science and AI, DaSCI Research Institute, University of Granada, Spain
Artificial Intelligence · Computational Intelligence · Data Science · Trustworthy AI
M. Victoria Velasco
Dept. of Mathematical Analysis, Faculty of Sciences, University of Granada, Granada, 18071, Spain