Expressivity of Neural Networks with Random Weights and Learned Biases

📅 2024-07-01
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work investigates the function-approximation capability of feedforward and recurrent neural networks under the constraint that only biases are optimized while weights remain fixed at random values. Method: Theoretically, a covering-number analysis establishes that a single-hidden-layer feedforward network with learned biases alone can densely approximate any continuous function; similarly, an RNN with bias-only optimization achieves uniform approximation of Lipschitz-continuous dynamical systems. Experimentally, the paradigm is evaluated on multi-task classification (MNIST, CIFAR-10) and chaotic trajectory prediction (Lorenz system). Results: Bias-only training achieves performance comparable to full-parameter optimization. This study provides the first combined theoretical and empirical demonstration that bias plasticity alone, without synaptic weight updates, suffices for universal multi-task approximation, revealing a mechanism for neural dynamical plasticity under static synaptic connectivity.
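To make the bias-only paradigm concrete, here is a minimal NumPy sketch, not the authors' code: the network size, learning rate, and toy target function are illustrative assumptions. It freezes the random input and output weights of a single-hidden-layer ReLU network and fits a 1-D function by gradient descent on the hidden biases only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single hidden layer: y(x) = v . relu(W * x + b).
# W and v stay at their random initialization; only b is trained.
H = 200
W = rng.normal(0.0, 2.0, H)                # input weights, frozen
v = rng.normal(0.0, 1.0, H) / np.sqrt(H)   # output weights, frozen
b = np.zeros(H)                            # biases: the only trained parameters

xs = np.linspace(-1.0, 1.0, 64)
target = np.sin(3.0 * xs)                  # toy 1-D target function (an assumption)

def forward(b):
    pre = np.outer(xs, W) + b              # (64, H) pre-activations
    return np.maximum(pre, 0.0) @ v, pre   # ReLU features read out by fixed v

y0, _ = forward(b)
mse0 = np.mean((y0 - target) ** 2)         # error before any training

lr = 0.05
for _ in range(2000):
    y, pre = forward(b)
    err = y - target                       # (64,) residuals
    # d(MSE)/db_j = (2/N) * sum_i err_i * v_j * 1[pre_ij > 0]
    grad_b = 2.0 * ((err[:, None] * (pre > 0.0)) * v).mean(axis=0)
    b -= lr * grad_b                       # update biases only; W, v untouched

y, _ = forward(b)
mse = np.mean((y - target) ** 2)
print(f"MSE before: {mse0:.4f}  after bias-only training: {mse:.4f}")
```

Because each bias shifts where its ReLU unit turns on, tuning biases repositions the breakpoints of a piecewise-linear approximant while the random weights fix the available slopes, which is the intuition behind the paper's density result.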

📝 Abstract
Landmark universal function approximation results for neural networks with trained weights and biases provided impetus for the ubiquitous use of neural networks as learning models in Artificial Intelligence (AI) and neuroscience. Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can similarly be learned by tuning smaller subsets of parameters, for example the output weights, within randomly initialized networks. Motivated by the fact that biases can be interpreted as biologically plausible mechanisms for adjusting unit outputs in neural networks, such as tonic inputs or activation thresholds, we investigate the expressivity of neural networks with random weights where only biases are optimized. We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only. We further show that an equivalent result holds for recurrent neural networks predicting dynamical system trajectories. Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights, as well as for AI, where they shed light on multi-task methods such as bias fine-tuning and unit masking.
Problem

Research questions and friction points this paper is trying to address.

Exploring universal approximation in neural networks where only biases are learned
Investigating the role of fixed random weights in function approximation
Validating bias-based tuning of network dynamics for neuroscience and AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fixed random weights with learned biases
Dense approximation of continuous functions via bias tuning alone
Applies to feedforward and recurrent networks
Ezekiel Williams
Mathematics and Statistics, Université de Montréal, Montréal, Québec, Canada
Avery Hee-Woon Ryoo
Computer Science, Université de Montréal, Montréal, Québec, Canada
Thomas Jiralerspong
Graduate Student - Computer Science (AI option) - Mila/Université de Montréal
deep learning, reinforcement learning, generative models, brain-inspired AI, AI for good
Alexandre Payeur
Mathematics and Statistics, Université de Montréal, Montréal, Québec, Canada
M. Perich
Neuroscience, Université de Montréal, Montréal, Québec, Canada
Luca Mazzucato
Biology, Physics, and Mathematics, University of Oregon, Eugene, Oregon, United States
Guillaume Lajoie
Professor, Mila & Université de Montréal
AI, dynamical systems, computational neuroscience, network dynamics, machine learning theory