🤖 AI Summary
Problem: Existing theoretical frameworks inadequately explain how structured representations emerge in finite-width ReLU neural networks, particularly for multi-context tasks.
Method: The authors establish an equivalence between ReLU networks and Gated Deep Linear Networks (GDLNs), use the GDLNs' greater tractability to derive learning dynamics, and apply this analysis to carefully designed, controllable multi-context tasks that require both feature learning and nonlinearity.
Contribution/Results: They show that a bias for node reuse and learning speed drives the emergence of hidden-layer representations, and that adding more contexts or hidden layers amplifies *mixed selectivity*: latent representations that are neither strictly modular nor disentangled, yet remain highly structured and reusable across contexts. This takes a step toward an interpretable, predictive theory of feature learning in finite-width ReLU networks, relaxing restrictive assumptions such as linear computation, infinite width, a single hidden layer, or unstructured data.
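
As a rough intuition for the equivalence (a minimal numpy sketch, not taken from the paper): a ReLU layer computes the same output as a linear layer multiplied elementwise by binary, data-dependent gates, relu(W x) = g(x) * (W x) with g(x) = 1[W x > 0]; a Gated Deep Linear Network treats such gates as given, which is what makes its learning dynamics more tractable.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))    # hypothetical hidden-layer weights
x = rng.standard_normal(3)         # a single input

pre = W @ x                        # pre-activations
relu_out = np.maximum(pre, 0.0)    # standard ReLU hidden layer

gates = (pre > 0).astype(float)    # binary, data-dependent gates g(x) = 1[Wx > 0]
gated_out = gates * (W @ x)        # the same output, written as a gated *linear* layer

assert np.allclose(relu_out, gated_out)
```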
📝 Abstract
Despite finite-width ReLU neural networks being a consistent factor behind recent deep learning successes, a theory of feature learning in these models remains elusive. Current insightful theories still rely on assumptions such as linearity of the network computation, unstructured input data, and architectural constraints such as infinite width or a single hidden layer. To begin to address this gap, we establish an equivalence between ReLU networks and Gated Deep Linear Networks, and use their greater tractability to derive the dynamics of learning. We then consider multiple variants of a core task reminiscent of multi-task learning or contextual control, which requires both feature learning and nonlinearity. We show that, for these tasks, ReLU networks possess an inductive bias towards latent representations which are not strictly modular or disentangled but are still highly structured and reusable between contexts. This effect is amplified with the addition of more contexts and hidden layers. Thus, we take a step towards a theory of feature learning in finite ReLU networks and shed light on how structured, mixed-selective latent representations can emerge from a bias for node reuse and learning speed.
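
For concreteness, a hypothetical toy version of such a contextual-control task (illustrative only; the paper's exact task designs are not reproduced here) pairs each input with a one-hot context cue, and the target mapping applied to the shared item features switches with the context, so no purely linear readout of the concatenated input can fit every context at once.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_features, n_contexts, n_out = 8, 4, 2, 3
items = rng.standard_normal((n_items, n_features))           # shared item features
maps = rng.standard_normal((n_contexts, n_out, n_features))  # one target map per context (hypothetical)

X, Y = [], []
for c in range(n_contexts):
    ctx = np.eye(n_contexts)[c]                   # one-hot context cue
    for item in items:
        X.append(np.concatenate([item, ctx]))     # input = item features + context cue
        Y.append(maps[c] @ item)                  # target depends on which context is active
X, Y = np.array(X), np.array(Y)
print(X.shape, Y.shape)                           # (16, 6) (16, 3)
```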