Hypothesis Spaces for Deep Learning

📅 2024-03-05
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Deep learning lacks a rigorous mathematical framework, particularly a well-defined function space for deep neural networks (DNNs). Method: This work constructs, for the first time, a reproducing kernel Banach space (RKBS) as the hypothesis space for DNNs. It treats a DNN as a joint function of the input and the parameters, forms the primitive set of DNNs over a prescribed parameter space of weight matrices and biases, completes the linear span of this set in the weak* topology to obtain a Banach space, and explicitly derives its reproducing kernel. Contributions/Results: (1) It establishes the first RKBS framework tailored to DNNs; (2) it proves that solutions of regularized learning and minimum-norm interpolation problems can be expressed as finite linear combinations of kernel sections; (3) it provides a unified representer theorem that precisely characterizes the solution structure and enhances theoretical interpretability. By grounding DNNs in functional analysis, specifically RKBS theory, this work furnishes a rigorous analytical foundation for deep learning.
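In symbols (a schematic paraphrase; the notation below is chosen for illustration and is not taken verbatim from the paper): write a DNN of prescribed depth and widths as a joint function $\mathcal{N}(x,\theta)$ of the physical variable $x$ and the parameter variable $\theta \in \Theta$. The primitive set is $\mathscr{A} := \{\mathcal{N}(\cdot,\theta) : \theta \in \Theta\}$, and the hypothesis space is the weak* completion of its linear span,
$$\mathcal{B} := \overline{\operatorname{span}\,\mathscr{A}}^{\,w^*}.$$
The paper shows that $\mathcal{B}$ is an RKBS with a reproducing kernel $\mathcal{K}$, and its representer theorems give solutions of regularized learning and minimum-norm interpolation with data $\{(x_j, y_j)\}_{j=1}^{n}$ the finite form
$$f^\star = \sum_{j=1}^{n} c_j\, \mathcal{K}(x_j, \cdot),$$
that is, a linear combination of kernel sections determined by the data.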

📝 Abstract
This paper introduces a hypothesis space for deep learning that employs deep neural networks (DNNs). By treating a DNN as a function of two variables, the physical variable and the parameter variable, we consider the primitive set of DNNs with the parameter variable ranging over a set of weight matrices and biases determined by a prescribed depth and prescribed widths of the DNNs. We then complete the linear span of the primitive DNN set in a weak* topology to construct a Banach space of functions of the physical variable. We prove that the Banach space so constructed is a reproducing kernel Banach space (RKBS) and construct its reproducing kernel. We investigate two learning models, regularized learning and the minimum-norm interpolation problem, in the resulting RKBS by establishing representer theorems for solutions of the learning models. The representer theorems show that solutions of these learning models can be expressed as linear combinations of a finite number of kernel sections determined by the given data and the reproducing kernel.
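As a minimal sketch of the "function of two variables" viewpoint, assuming an ordinary fully connected ReLU network (the architecture, names, and random parameters below are our own illustration, not the paper's construction):

import numpy as np

def dnn(x, theta):
    # Evaluate a fully connected ReLU network N(x, theta):
    #   x     -- the "physical" (input) variable,
    #   theta -- the "parameter" variable, a list of (W, b) pairs whose
    #            shapes are fixed by a prescribed depth and widths.
    h = x
    for W, b in theta[:-1]:
        h = np.maximum(W @ h + b, 0.0)    # hidden layers with ReLU activation
    W, b = theta[-1]
    return W @ h + b                      # linear output layer

# Each parameter choice theta selects one member N(., theta) of the
# primitive DNN set; the paper builds its hypothesis space from the
# linear span of all such functions, completed in the weak* topology.
rng = np.random.default_rng(0)
widths = [3, 8, 8, 1]                     # prescribed depth and widths
theta = [(rng.standard_normal((m, n)), rng.standard_normal(m))
         for n, m in zip(widths[:-1], widths[1:])]
x = rng.standard_normal(3)
print(dnn(x, theta))                      # value of N(x, theta) at the input x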
Problem

Research questions and friction points this paper is trying to address.

Constructing a Banach space for deep neural networks
Establishing a reproducing kernel within the RKBS framework
Solving learning models via representer theorems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treating DNNs as functions of the input and parameter variables
Constructing a Banach space via weak* closure of the DNN span
RKBS framework with representer theorems (see the numerical sketch below)
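To make the representer-theorem structure concrete, here is a small numerical sketch. The kernel below is a hypothetical placeholder (the paper derives its own explicit reproducing kernel for the DNN-based RKBS), and since the hypothesis space is a Banach rather than a Hilbert space, the true coefficients are not obtained by the simple linear solve used here; the sketch only illustrates that the learned function is a finite linear combination of kernel sections K(x_j, ·).

import numpy as np

def kernel(x, z, scale=1.0):
    # Placeholder Gaussian kernel used only for illustration; the paper's
    # reproducing kernel for the DNN-based RKBS is a different, explicit object.
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * scale ** 2))

def fit_interpolant(X, y):
    # Find coefficients c so that f(x) = sum_j c_j * K(x_j, x) interpolates
    # the data -- the finite "kernel section" form that the representer
    # theorems guarantee for solutions of the learning models.
    G = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    return np.linalg.solve(G, y)

def evaluate(c, X, x):
    # The learned function is a linear combination of kernel sections K(x_j, .).
    return sum(cj * kernel(xj, x) for cj, xj in zip(c, X))

X = np.array([[0.0], [0.5], [1.0]])       # sample inputs x_1, ..., x_n
y = np.array([0.0, 1.0, 0.0])             # observed values
c = fit_interpolant(X, y)
print(evaluate(c, X, np.array([0.25])))   # prediction at a new input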
🔎 Similar Papers
No similar papers found.
Rui Wang
School of Mathematics, Jilin University, Changchun 130012, P. R. China
Yuesheng Xu
Professor of Data Science & Math, Old Dominion University; Professor Emeritus, Syracuse University
numerical analysis, approximation theory, machine learning, image/signal processing
Mingsong Yan
Department of Mathematics and Statistics, Old Dominion University, Norfolk, VA 23529, USA