Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks

📅 2025-03-08
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Existing theoretical frameworks inadequately explain how structured representations emerge in finite-width ReLU neural networks, particularly for multi-context tasks. Method: The authors establish a dynamical equivalence between ReLU networks and gated deep linear networks in multi-context settings, leveraging nonlinear dynamical analysis, inductive-bias modeling, and carefully designed controllable tasks. Contribution/Results: They show that node reuse and learning speed jointly drive the emergence of hidden-layer representations; moreover, increasing network depth or context count systematically enhances *mixed selectivity*, a non-decomposable yet highly structured latent representation. The work takes a step towards an interpretable and predictive theory of feature learning for finite-width ReLU networks, relaxing restrictive assumptions such as infinite width, single-layer architectures, or unstructured data.
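The core equivalence the summary refers to can be illustrated numerically: for any fixed input, a ReLU layer acts as a diagonal gate (1 where the pre-activation is positive, 0 elsewhere) applied to a linear map, so the forward pass matches that of a gated linear network. The sketch below is illustrative only (not the paper's code), using a hypothetical one-hidden-layer network with random weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # input -> hidden weights
W2 = rng.standard_normal((2, 4))   # hidden -> output weights
x = rng.standard_normal(3)

# Standard ReLU forward pass.
h = np.maximum(W1 @ x, 0.0)
y_relu = W2 @ h

# Gated linear view: gates g_i = 1[(W1 x)_i > 0] replace the
# nonlinearity with an input-dependent diagonal matrix, leaving
# a purely linear computation for this activation pattern.
g = (W1 @ x > 0).astype(float)
y_gated = W2 @ (np.diag(g) @ (W1 @ x))

assert np.allclose(y_relu, y_gated)
```

The tractability gain comes from the gated view: while the gates stay fixed, learning dynamics are those of a deep linear network, which admit closed-form analysis.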

📝 Abstract
In spite of finite dimension ReLU neural networks being a consistent factor behind recent deep learning successes, a theory of feature learning in these models remains elusive. Currently, insightful theories still rely on assumptions including the linearity of the network computations, unstructured input data and architectural constraints such as infinite width or a single hidden layer. To begin to address this gap we establish an equivalence between ReLU networks and Gated Deep Linear Networks, and use their greater tractability to derive dynamics of learning. We then consider multiple variants of a core task reminiscent of multi-task learning or contextual control which requires both feature learning and nonlinearity. We make explicit that, for these tasks, the ReLU networks possess an inductive bias towards latent representations which are not strictly modular or disentangled but are still highly structured and reusable between contexts. This effect is amplified with the addition of more contexts and hidden layers. Thus, we take a step towards a theory of feature learning in finite ReLU networks and shed light on how structured mixed-selective latent representations can emerge due to a bias for node-reuse and learning speed.
Problem

Research questions and friction points this paper is trying to address.

Develops theory for feature learning in finite ReLU networks.
Explores structured mixed-selectivity in latent representations.
Analyzes node-reuse and learning speed biases in networks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivalence between ReLU and Gated Deep Linear Networks
Inductive bias towards structured latent representations
Amplified structured mixed-selectivity with more layers
Devon Jarvis
University of the Witwatersrand
Deep Learning Theory · Computational Neuroscience
Richard Klein
School of Computer Science and Applied Mathematics, University of the Witwatersrand; Machine Intelligence and Neural Discovery Institute, University of the Witwatersrand
Benjamin Rosman
Professor at the University of the Witwatersrand, South Africa
Robotics · Artificial Intelligence · Machine Learning · Decision Making · Reinforcement Learning
Andrew M. Saxe
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre, UCL; CIFAR Azrieli Global Scholar, CIFAR