A Unified Representation of Neural Networks Architectures

📅 2025-12-19
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses continuous-limit modeling of neural network architectures as width and depth tend to infinity. It proposes the Distributed Parameter neural Network (DiPaNet) framework, a unified mathematical formalism that establishes a homogenization/discretization duality between finite-dimensional networks and infinite-dimensional continuous models. Methodologically, DiPaNet combines integral-equation modeling, discretization-error analysis of neural ODEs, continuous-limit derivations, and a uniform-continuity characterization of the weight functions. Theoretically, the paper derives explicit approximation-error bounds and shows that canonical architectures, including finite-width/finite-depth networks, continuous neural networks (CNNs), neural ODEs, and deep residual CNNs, arise as special cases of DiPaNet under distinct scaling regimes. This yields a deterministic theoretical foundation for continuous neural network modeling that unifies these paradigms within a single coherent framework.
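
The paper's equations are not reproduced on this page. As a rough illustration of the two limiting regimes the summary refers to, an infinite-width layer is commonly written as an integral over a neuron index and an infinite-depth residual network as a neural ODE; the forms below are standard textbook versions, not DiPaNet's exact definition:

```latex
% Illustrative limiting forms (assumed, not quoted from the paper).
% Infinite width: a single hidden layer as an integral over a neuron index \omega
f(x) = \int_{\Omega} A(\omega)\,\sigma\!\big(W(\omega)\,x + b(\omega)\big)\,\mathrm{d}\omega
% Infinite depth: a neural ODE, whose explicit Euler discretization with
% step h = 1/L is a depth-L residual network
\frac{\mathrm{d}z(t)}{\mathrm{d}t} = f\big(z(t), \theta(t)\big), \qquad
z_{k+1} = z_k + \tfrac{1}{L}\, f\big(z_k, \theta(t_k)\big), \quad t_k = \tfrac{k}{L}
```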

📝 Abstract
In this paper we consider the limiting case of neural network (NN) architectures when the number of neurons in each hidden layer and the number of hidden layers tend to infinity, thus forming a continuum, and we derive approximation errors as a function of the number of neurons and/or hidden layers. First, we consider neural networks with a single hidden layer and derive an integral infinite-width neural representation that generalizes existing continuous neural network (CNN) representations. We then extend this to deep residual CNNs that have a finite number of integral hidden layers and residual connections. Second, we revisit the relation between neural ODEs and deep residual NNs and formalize approximation errors via discretization techniques. We then merge these two approaches into a unified homogeneous representation of NNs as a Distributed Parameter neural Network (DiPaNet) and show that most existing finite- and infinite-dimensional NN architectures are related to the DiPaNet representation via homogenization/discretization. Our approach is purely deterministic and applies to general, uniformly continuous matrix weight functions. Differences and similarities with neural fields are discussed, along with further possible generalizations and applications of the DiPaNet framework.
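
To make the width limit concrete, here is a minimal numerical sketch, assuming an integral representation of the form above with illustrative weight functions a, w, b (not taken from the paper): a finite-width layer is recovered as a midpoint Riemann sum over the neuron index, and the quadrature error shrinks as the width N grows.

```python
# Sketch: a finite-width layer as a Riemann-sum discretization of an
# integral (infinite-width) representation. The choices of a, w, b are
# illustrative, uniformly continuous weight functions.
import numpy as np

def sigma(z):
    return np.tanh(z)

a = lambda om: np.sin(2 * np.pi * om)   # output weights over omega in [0, 1]
w = lambda om: 3.0 * om - 1.5           # input weights
b = lambda om: np.cos(np.pi * om)       # biases

def f_continuum(x, n_quad=100_000):
    """High-resolution quadrature as a stand-in for the exact integral."""
    om = (np.arange(n_quad) + 0.5) / n_quad
    return np.mean(a(om) * sigma(w(om) * x + b(om)))

def f_width_N(x, N):
    """Finite-width network: midpoint Riemann sum with N neurons."""
    om = (np.arange(N) + 0.5) / N
    return np.mean(a(om) * sigma(w(om) * x + b(om)))

x = 0.7
ref = f_continuum(x)
for N in [4, 16, 64, 256]:
    print(f"N = {N:4d}  |f_N(x) - f(x)| = {abs(f_width_N(x, N) - ref):.2e}")
```

With uniformly continuous weight functions the Riemann sum converges deterministically to the integral, which is the flavor of width-discretization error the abstract describes.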
Problem

Research questions and friction points this paper is trying to address.

Derives approximation errors for infinite-width and infinite-depth neural networks
Unifies neural ODEs and deep residual networks via discretization techniques (see the sketch after this list)
Proposes a homogeneous DiPaNet representation for most existing neural architectures
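
A minimal sketch of the neural-ODE/residual-network correspondence named in the second bullet, under an illustrative time-dependent vector field (the specific W(t) below is an assumption, not from the paper): a depth-L residual block is the explicit Euler scheme for the ODE, and the discretization error decays as L grows.

```python
# Sketch: a deep residual network as the explicit Euler discretization of a
# neural ODE dz/dt = f(z, t) on [0, 1]. Uniform continuity of the weight
# function in t is the kind of assumption such error bounds rely on.
import numpy as np

def f(z, t):
    """Illustrative vector field; W(t) plays the role of a continuous
    matrix weight function."""
    W = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return np.tanh(W @ z)

def resnet_forward(z0, L):
    """Residual network with L blocks: z_{k+1} = z_k + (1/L) f(z_k, t_k)."""
    z, h = z0.copy(), 1.0 / L
    for k in range(L):
        z = z + h * f(z, k * h)
    return z

z0 = np.array([1.0, 0.0])
ref = resnet_forward(z0, 20_000)   # very fine discretization ~ ODE solution
for L in [2, 8, 32, 128]:
    err = np.linalg.norm(resnet_forward(z0, L) - ref)
    print(f"L = {L:4d}  ||z_L - z(1)|| = {err:.2e}")
```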
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified neural network representation via continuum limit
Deep residual CNNs with integral hidden layers
Homogeneous DiPaNet framework for diverse architectures (a hedged sketch follows below)
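
Since this page does not reproduce DiPaNet's definition, the following is only a hedged guess at the flavor of a distributed-parameter (depth-and-width continuous) model: an integro-differential system whose joint Euler (depth) and Riemann-sum (width) discretization yields an ordinary deep residual network. Every name and kernel here is illustrative, not the paper's construction.

```python
# Hedged sketch of a depth-and-width continuum model in the spirit of a
# distributed-parameter system:
#     dz(t, om)/dt = \int_0^1 W(t, om, om') sigma(z(t, om')) d om'
# Euler in depth plus a midpoint Riemann sum in width gives a finite deep
# residual network; refining both grids shrinks the discretization error.
import numpy as np

def W(t, om, omp):
    # Illustrative uniformly continuous scalar kernel W(t, omega, omega').
    return np.cos(np.pi * (om - omp)) * (1.0 + 0.5 * np.sin(2 * np.pi * t))

def discretized(z0_fn, N, L):
    om = (np.arange(N) + 0.5) / N               # width nodes (midpoint rule)
    z = z0_fn(om)                               # initial state z(0, omega)
    h = 1.0 / L                                 # depth step (explicit Euler)
    for k in range(L):
        K = W(k * h, om[:, None], om[None, :])  # N x N kernel matrix
        z = z + h * (K @ np.tanh(z)) / N        # residual update
    return om, z

z0_fn = lambda om: np.sin(2 * np.pi * om)
_, ref = discretized(z0_fn, 256, 2048)          # fine grid as reference
for N, L in [(8, 16), (32, 128), (128, 1024)]:
    om, z = discretized(z0_fn, N, L)
    idx = np.clip((om * 256).astype(int), 0, 255)   # nearest reference nodes
    print(f"N = {N:3d}, L = {L:4d}  max |z - ref| = {np.max(np.abs(z - ref[idx])):.2e}")
```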