AI Summary
This work addresses the gap between the classical Universal Approximation Theorem (UAT) and practical neural network training on noisy data. We propose the first algorithmically realizable framework achieving uniform approximation from noisy datasets. Methodologically, we design a model-specific stochastic training algorithm augmented with an initial clustering-based attention layer, enabling uniform approximation over the $d$-dimensional unit cube using only $N$ noisy samples. Our theoretical contributions are threefold: (1) the model achieves minimax-optimal parameter complexity, up to logarithmic factors; (2) it simultaneously exhibits exact interpolation, sublinear fine-tuning complexity, and well-behaved Lipschitz regularity; and (3) the approximation error converges to the noise-free UAT bound as the sample size grows. Together, these results substantially narrow the longstanding disconnect between abstract approximation theory and empirical deep learning practice.
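The summary does not spell out the construction, but the role of an initial clustering-based attention layer can be illustrated with a minimal sketch: cluster the noisy inputs, average the noisy labels within each cluster to suppress noise, then route query points to cluster centres with softmax-style attention weights. Everything below (the `kmeans` helper, `cluster_attention`, the `temp` parameter) is an illustrative assumption, not the paper's actual algorithm.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm -- an illustrative stand-in for the clustering step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def cluster_attention(X, y, k=20, temp=0.01):
    """Denoise by within-cluster label averaging, then attend over cluster centres."""
    centers, labels = kmeans(X, k)
    # Averaging ~N/k noisy labels per cluster shrinks the noise by ~sqrt(k/N).
    ybar = np.array([y[labels == j].mean() if np.any(labels == j) else 0.0
                     for j in range(k)])
    def predict(Xq):
        dists = np.linalg.norm(Xq[:, None, :] - centers[None, :, :], axis=2)
        w = np.exp(-dists / temp)          # softmax-style attention weights
        w /= w.sum(axis=1, keepdims=True)
        return w @ ybar                    # attention-weighted denoised labels
    return predict
```

The intuition this sketch captures: with $N$ noisy samples split across $k$ clusters, each within-cluster average carries noise of order $\sigma\sqrt{k/N}$, which is why a clustering front-end can make a uniform (sup-norm) guarantee compatible with noisy data.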
Abstract
At its core, machine learning seeks models that generalize reliably from noisy observations; yet state-of-the-art universal approximation theorems (UATs) operate in a theoretical vacuum that isolates them from this goal, assuming noiseless data and allowing network parameters to be chosen freely, with no regard for algorithmic realizability. This paper bridges that gap by introducing an architecture-specific randomized training algorithm that constructs a uniform approximator from $N$ noisy training samples on the $d$-dimensional cube $[0,1]^d$. Our trained neural networks attain the minimax-optimal number of \textit{trainable} (non-random) parameters, up to logarithmic factors that vanish under the idealized noiseless sampling assumed in classical UATs.
Additionally, our trained models replicate key behaviours of real-world neural networks that are absent from standard UAT constructions: (1) sub-linear parametric complexity when fine-tuning on structurally related, favourable out-of-distribution tasks; (2) exact interpolation of the training data; and (3) reasonable Lipschitz regularity (after the initial clustering-based attention layer). These properties bring state-of-the-art UATs closer to practical machine learning, shifting the central open question from whether uniform approximation is algorithmically implementable from noisy samples to whether stochastic gradient descent can achieve comparable guarantees.
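Properties (2) and (3) are not in tension, and the simplest possible example makes that concrete: a piecewise-linear interpolant fits every training point exactly while its Lipschitz constant is just the steepest chord slope between consecutive points. This is a toy illustration of the two properties coexisting, not the paper's network construction; the data below are made up.

```python
import numpy as np

# Toy training set (illustrative values only).
x = np.array([0.0, 0.2, 0.5, 0.7, 1.0])
y = np.array([0.0, 0.3, 0.1, 0.6, 0.4])

def f(q):
    """Piecewise-linear interpolant through the training points."""
    return np.interp(q, x, y)

# (2) Exact interpolation: the model reproduces every training label.
assert np.allclose(f(x), y)

# (3) Lipschitz regularity: the global Lipschitz constant of a
# piecewise-linear function is the steepest slope between neighbours.
L = np.max(np.abs(np.diff(y) / np.diff(x)))
grid = np.linspace(0.0, 1.0, 1001)
slopes = np.abs(np.diff(f(grid)) / np.diff(grid))
assert slopes.max() <= L + 1e-9
```

The same trade-off is what the paper's construction controls at scale: interpolating noisy labels forces the Lipschitz constant up, which is why the Lipschitz guarantee is stated only after the clustering attention layer has absorbed the noise.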