A Quantifier-Reversal Approximation Paradigm for Recurrent Neural Networks

📅 2025-11-19
🤖 AI Summary
Conventional neural network approximation customizes the architecture and parameters for each target function and error tolerance ε. Method: This paper proposes a new recurrent neural network (RNN) approximation paradigm: a single fixed-topology, fixed-weight RNN achieves ε-approximation of a target function to arbitrary precision solely by running for more time steps, with approximation error decaying exponentially in runtime. The core innovation is a quantifier-reversal mechanism and clock-driven functional composition, which let a single network adapt autonomously to any ε. Hidden-state dynamics emulate affine transformations, linear combinations, and functional composition—hallmarks of deep ReLU networks—thereby realizing temporal computation and weight sharing. Results: For univariate polynomials, the hidden-state dimension scales linearly with the polynomial degree, drastically reducing memory overhead. This makes the approach particularly suitable for memory-constrained hardware where computational latency can be traded for accuracy.

📝 Abstract
Classical neural network approximation results take the form: for every function $f$ and every error tolerance $\epsilon>0$, one constructs a neural network whose architecture and weights depend on $\epsilon$. This paper introduces a fundamentally different approximation paradigm that reverses this quantifier order. For each target function $f$, we construct a single recurrent neural network (RNN) with fixed topology and fixed weights that approximates $f$ to within any prescribed tolerance $\epsilon>0$ when run for sufficiently many time steps. The key mechanism enabling this quantifier reversal is temporal computation combined with weight sharing: rather than increasing network depth, the approximation error is reduced solely by running the RNN longer. This yields exponentially decaying approximation error as a function of runtime while requiring storage of only a small, fixed set of weights. Such architectures are appealing for hardware implementations where memory is scarce and runtime is comparatively inexpensive. To initiate the systematic development of this novel approximation paradigm, we focus on univariate polynomials. Our RNN constructions emulate the structural calculus underlying deep feed-forward ReLU network approximation theory -- parallelization, linear combinations, affine transformations, and, most importantly, a clocked mechanism that realizes function composition within a single recurrent architecture. The resulting RNNs have size independent of the error tolerance $\epsilon$ and hidden-state dimension linear in the degree of the polynomial.
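The quantifier reversal the abstract describes can be written out explicitly. The notation below is illustrative, not taken from the paper; the sup norm and the constants $C, a$ are assumptions for the sketch:

```latex
% Classical approximation: the network depends on the tolerance.
\forall f \;\; \forall \epsilon > 0 \;\; \exists \, \Phi_{f,\epsilon}:
  \quad \| f - \Phi_{f,\epsilon} \|_\infty \le \epsilon .

% This paper: one fixed RNN per function; only the runtime depends on the tolerance.
\forall f \;\; \exists \, \Phi_f \;\; \forall \epsilon > 0 \;\; \exists \, T(\epsilon):
  \quad \bigl\| f - \Phi_f^{(T(\epsilon))} \bigr\|_\infty \le \epsilon ,
  \qquad \text{with} \quad \bigl\| f - \Phi_f^{(T)} \bigr\|_\infty \le C \, e^{-aT},
```

where $\Phi_f^{(T)}$ denotes the fixed-weight RNN $\Phi_f$ run for $T$ time steps. The exponential bound in $T$ is what lets a single network meet every tolerance by running longer.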
Problem

Research questions and friction points this paper is trying to address.

Constructing a single fixed-weight RNN that achieves arbitrary-precision approximation
Reversing the quantifier order of classical neural network approximation results
Using temporal computation to reduce error without increasing network depth
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single RNN with fixed topology and weights
Temporal computation reduces approximation error
Exponentially decaying error with fixed storage
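The bullets above can be made concrete with a minimal sketch. This is not the paper's construction: it uses the well-known Yarotsky-style telescoping of the hat function to approximate x² on [0, 1], but it shares the key property the paper emphasizes: the same fixed update is applied at every step, and the error shrinks as 4^(-T) with runtime T while the stored "weights" never change.

```python
def hat(s: float) -> float:
    """Hat function g(s) = 2*min(s, 1-s); expressible with two ReLUs."""
    return 2.0 * min(s, 1.0 - s)

def rnn_square(x: float, T: int) -> float:
    """Approximate x**2 on [0, 1] by running one fixed update for T steps.

    Hidden state (s, c, y): s carries iterated hat functions, c is a
    clock-like geometric damping term, y accumulates the telescoping
    correction.  The error is bounded by 4**-(T+1); accuracy improves
    only through T, never through new weights.
    """
    s, c, y = x, 1.0, x
    for _ in range(T):
        s = hat(s)      # same map at every step: weight sharing in time
        c = c / 4.0     # fixed linear decay drives exponential accuracy
        y = y - c * s   # accumulate the next telescoping correction
    return y
```

The multiplicative gate `c * s` is an expository shortcut; the paper's RNNs realize the clocked composition and damping within a recurrent ReLU architecture.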
Clemens Hutter
Swiss National Bank, Börsenstrasse 15, 8001 Zürich, Switzerland
Valentin Abadie
ETH Zürich, Chair for Mathematical Information Science, Sternwartstrasse 7, 8092 Zürich, Switzerland
Helmut Bölcskei
Professor of Mathematical Information Science, ETH Zurich
Machine Learning Theory, Mathematical Signal Processing, Data Science, Statistics