Beyond Universal Approximation Theorems: Algorithmic Uniform Approximation by Neural Networks Trained with Noisy Data

๐Ÿ“… 2025-08-31
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the gap between the classical Universal Approximation Theorem (UAT) and practical neural network training under noisy data. We propose the first algorithmically realizable framework achieving uniform approximation on noisy datasets. Methodologically, we design a model-specific stochastic training algorithm augmented with an initial clustering-based attention layer, enabling uniform approximation over the $d$-dimensional unit cube using only $N$ noisy samples. Our theoretical contributions are threefold: (1) the model achieves minimax-optimal parameter complexity, up to logarithmic factors; (2) it simultaneously interpolates the training data exactly, admits sublinear fine-tuning complexity, and maintains well-behaved Lipschitz regularity; and (3) the approximation error converges asymptotically to the noise-free UAT bound as the sample size grows. Together, these results substantially narrow the longstanding disconnect between abstract approximation theory and empirical deep learning practice.
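The paper's actual construction is not reproduced here, but the core idea in the summary can be sketched with a hypothetical, minimal stand-in: a first "clustering attention" layer routes each input to its nearest anchor cell on $[0,1]^d$, and averaging the noisy labels within each cell suppresses noise roughly like $1/\sqrt{N/K}$. All names and constants below ($K$ anchors, the toy target `f`, the noise level) are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N noisy samples of a toy target f on [0,1]^d (d = 2).
d, N, K = 2, 2000, 50
f = lambda x: np.sin(2 * np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
X = rng.uniform(0.0, 1.0, size=(N, d))
y = f(X) + rng.normal(0.0, 0.1, size=N)   # noisy labels

# Crude "clustering attention": K random anchors induce a nearest-anchor
# (Voronoi) partition of the cube, mimicking a hard-routing first layer.
anchors = rng.uniform(0.0, 1.0, size=(K, d))

def route(Z):
    # index of the nearest anchor for each row of Z
    return np.argmin(((Z[:, None, :] - anchors[None, :, :]) ** 2).sum(-1), axis=1)

# Averaging the noisy labels inside each cell de-noises the local value.
cells = route(X)
cell_mean = np.array([y[cells == k].mean() if (cells == k).any() else 0.0
                      for k in range(K)])

def predict(Z):
    # piecewise-constant approximator: emit the de-noised mean of Z's cell
    return cell_mean[route(Z)]

# Empirical sup-norm error on fresh points (the quantity a uniform
# approximation guarantee controls).
Xt = rng.uniform(0.0, 1.0, size=(500, d))
sup_err = np.max(np.abs(predict(Xt) - f(Xt)))
```

In this sketch the sup-norm error is dominated by the cell diameter times the target's Lipschitz constant; the paper's point is that a trained network can achieve such uniform control with a minimax-optimal parameter count, which this toy partition makes no attempt to match.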

๐Ÿ“ Abstract
At its core, machine learning seeks to train models that reliably generalize beyond noisy observations; however, the theoretical vacuum in which state-of-the-art universal approximation theorems (UATs) operate isolates them from this goal, as they assume noiseless data and allow network parameters to be chosen freely, independent of algorithmic realism. This paper bridges that gap by introducing an architecture-specific randomized training algorithm that constructs a uniform approximator from $N$ noisy training samples on the $d$-dimensional cube $[0,1]^d$. Our trained neural networks attain the minimax-optimal number of *trainable* (non-random) parameters, up to logarithmic factors which vanish under the idealized noiseless sampling assumed in classical UATs. Additionally, our trained models replicate key behaviours of real-world neural networks, absent in standard UAT constructions, by: (1) exhibiting sub-linear parametric complexity when fine-tuning on structurally related and favourable out-of-distribution tasks, (2) exactly interpolating the training data, and (3) maintaining reasonable Lipschitz regularity (after the initial clustering attention layer). These properties bring state-of-the-art UATs closer to practical machine learning, shifting the central open question from algorithmic implementability with noisy samples to whether stochastic gradient descent can achieve comparable guarantees.
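Properties (2) and (3) from the abstract can be checked empirically on any predictor. As a purely illustrative stand-in (not the paper's architecture), a 1-nearest-neighbour readout is the simplest map that exactly interpolates noisy labels; the snippet below verifies interpolation and estimates an empirical Lipschitz constant on random pairs. All names and sizes are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical noisy dataset on [0,1]^d with arbitrary labels.
d, N = 3, 200
X = rng.uniform(0.0, 1.0, size=(N, d))
y = rng.normal(size=N)

def predict(Z):
    # Route each query to its nearest training point and copy that label,
    # so every training input reproduces its own label (exact interpolation).
    i = np.argmin(((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1), axis=1)
    return y[i]

# Property (2): exact interpolation of the (noisy) training data.
assert np.allclose(predict(X), y)

# Property (3), checked empirically: max |g(a) - g(b)| / ||a - b|| over
# random pairs gives a crude lower bound on the Lipschitz constant.
A = rng.uniform(0.0, 1.0, size=(1000, d))
B = rng.uniform(0.0, 1.0, size=(1000, d))
ratios = np.abs(predict(A) - predict(B)) / np.linalg.norm(A - B, axis=1)
lip_hat = ratios.max()
```

Note that a raw 1-NN map is only piecewise constant, so its true Lipschitz constant blows up at cell boundaries; the abstract's claim is stronger, namely that the trained network stays genuinely Lipschitz after the initial clustering attention layer.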
Problem

Research questions and friction points this paper is trying to address.

Bridging theoretical approximation theorems with noisy data training
Developing uniform approximators from noisy samples on multidimensional cubes
Achieving minimax-optimal parameters while maintaining practical network behaviors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Architecture-specific randomized training algorithm
Minimax-optimal number of trainable parameters
Uniform approximator from noisy training samples
๐Ÿ”Ž Similar Papers
No similar papers found.