🤖 AI Summary
This study investigates how the allocation of a fixed budget of learnable weights affects neural network expressivity under parameter constraints, such as biological plausibility or hardware limitations. We propose a teacher–student knowledge distillation framework that yields a quantitative expressivity benchmark, and we conduct a theoretical analysis of linear recurrent and feedforward networks that links weight allocation to matrix rank and subspace coverage, deriving necessary and sufficient conditions for extremal expressivity. To our knowledge, this is the first systematic characterization of the quantitative relationship between allocation strategies and expressivity; it is accompanied by a heuristic estimation principle for suboptimal allocations, which extends to ReLU networks. Experiments demonstrate that widely dispersed weight allocation significantly enhances expressivity, that the theoretical predictions achieve over 92% accuracy, and that these findings bridge biological synaptic plasticity mechanisms with efficient deep learning architecture design.
📝 Abstract
In traditional machine learning, models are defined by a set of parameters that are optimized to perform specific tasks. In neural networks, these parameters correspond to the synaptic weights. In practice, however, it is often infeasible to control or update all weights. This challenge is not limited to artificial networks but extends to biological ones such as the brain, where the extent of distributed synaptic weight modification during learning remains unclear. Motivated by these insights, we theoretically investigate how different allocations of a fixed number of learnable weights influence the expressivity of neural networks. Using a teacher–student setup, we introduce a benchmark that quantifies the expressivity associated with each allocation. We establish conditions under which allocations attain maximal or minimal expressive power in linear recurrent neural networks and linear multi-layer feedforward networks. For suboptimal allocations, we propose heuristic principles to estimate their expressivity; these principles extend to shallow ReLU networks as well. Finally, we validate our theoretical findings with empirical experiments. Our results emphasize the critical role of strategically distributing learnable weights across the network, showing that a more widespread allocation generally enhances the network's expressive power.
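The teacher–student benchmark described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual protocol: the dimension, the 50% random allocation mask, and plain gradient descent are all illustrative assumptions. A fixed teacher defines the target mapping, and a student with the same architecture may update only the weights selected by an allocation mask; the remaining imitation error after training serves as the expressivity score for that allocation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6                                  # input/output dimension (illustrative)
W_teacher = rng.normal(size=(d, d))    # fixed teacher weights

# Student starts from different random weights; only the entries where
# mask == 1 are learnable -- this mask is the "allocation" being evaluated.
W_student = rng.normal(size=(d, d))
mask = (rng.random((d, d)) < 0.5).astype(float)  # hypothetical 50% allocation

X = rng.normal(size=(d, 512))          # probe inputs
Y = W_teacher @ X                      # teacher outputs the student must imitate

loss0 = np.mean((W_student @ X - Y) ** 2)  # error before training

lr = 1e-3
for _ in range(5000):
    err = W_student @ X - Y
    grad = err @ X.T / X.shape[1]      # gradient of the mean-squared error
    W_student -= lr * mask * grad      # update only the allocated weights

# Expressivity benchmark for this allocation: the residual imitation error.
loss = np.mean((W_student @ X - Y) ** 2)
```

Comparing `loss` across different masks with the same number of ones gives a quantitative ranking of allocation strategies; a widely dispersed mask should typically reach a lower residual error than one concentrated in a few rows or columns.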