🤖 AI Summary
This work addresses the fundamental question of whether statistical physics can characterise the feature learning capability of deep neural networks. We study multilayer perceptrons whose width scales proportionally with the input dimension and whose parameter count is comparable to the sample size (the interpolation regime). Employing a teacher–student framework combined with Bayesian-optimal analysis, we establish, for the first time in a finite-width, non-kernel regime, the mechanism by which feature learning emerges. We identify a non-uniform propagation of specialisation from shallow to deep layers and show that deeper target features are inherently harder to learn. We derive precise sample-size thresholds required for optimal performance, uncover multiple learning phase transitions, and predict dynamical barriers that trap training in suboptimal solutions. Our results go beyond the analytical limitations of narrow-network and kernel-method approaches, providing a tractable, analytically solvable statistical-physics paradigm for deep learning theory.
📝 Abstract
For three decades, statistical physics has provided a framework for analysing neural networks. A long-standing question is whether it can tackle deep learning models that capture rich feature learning effects, thus going beyond the narrow networks or kernel methods analysed until now. We answer this positively through the study of the supervised learning of a multi-layer perceptron. Importantly, (i) its width scales proportionally with the input dimension, making it more prone to feature learning than ultra-wide networks and more expressive than narrow ones or networks with fixed embedding layers; and (ii) we focus on the challenging interpolation regime, where the numbers of trainable parameters and of data are comparable, which forces the model to adapt to the task. We consider the matched teacher-student setting. It provides the fundamental limits of learning random deep neural network targets and helps identify the sufficient statistics describing what is learnt by an optimally trained network as the data budget increases. A rich phenomenology emerges, with various learning transitions. With enough data, optimal performance is attained through the model's "specialisation" towards the target, but it can be hard to reach for training algorithms, which get attracted by sub-optimal solutions predicted by the theory. Specialisation occurs inhomogeneously across layers, propagating from shallow towards deep ones, but also across the neurons within each layer. Furthermore, deeper targets are harder to learn. Despite its simplicity, the Bayesian-optimal setting provides insights on how depth, non-linearity and finite (proportional) width influence neural networks in the feature learning regime, insights that are potentially relevant well beyond this setting.
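To make the setting concrete, below is a minimal sketch (not taken from the paper) of the matched teacher-student setup it describes: a random "teacher" MLP whose hidden widths grow proportionally with the input dimension generates the labels, the sample budget is of the same order as the number of trainable parameters, and the student shares the exact same architecture. The two-hidden-layer depth, the tanh non-linearity, the Gaussian weight scaling, and the data-per-parameter ratio `alpha` are all illustrative assumptions, not choices prescribed by the paper.

```python
# Illustrative sketch of the proportional-width, matched teacher-student setting.
import numpy as np

rng = np.random.default_rng(0)

d = 200                      # input dimension
widths = [d, d, d]           # hidden widths scale proportionally with d
n_params = sum(a * b for a, b in zip([d] + widths[:-1], widths)) + widths[-1]
alpha = 1.5                  # data budget per parameter: O(1) in the interpolation regime
n = int(alpha * n_params)    # sample size comparable to the parameter count

def random_mlp(d_in, widths, rng):
    """Draw random Gaussian weights for an MLP with the given hidden widths."""
    dims = [d_in] + widths
    Ws = [rng.standard_normal((dims[i], dims[i + 1])) / np.sqrt(dims[i])
          for i in range(len(widths))]
    a = rng.standard_normal(widths[-1]) / np.sqrt(widths[-1])  # readout vector
    return Ws, a

def forward(X, Ws, a):
    """Propagate inputs through the MLP with tanh non-linearities."""
    h = X
    for W in Ws:
        h = np.tanh(h @ W)
    return h @ a

# Teacher: a fixed random target network generating the labels.
teacher_Ws, teacher_a = random_mlp(d, widths, rng)
X = rng.standard_normal((n, d))
y = forward(X, teacher_Ws, teacher_a)

# Student: same (matched) architecture, randomly initialised; training it on
# (X, y) probes how much data is needed for its layers to "specialise".
student_Ws, student_a = random_mlp(d, widths, rng)
print(f"d={d}, parameters={n_params}, samples={n}")
```

The point of the sketch is only to show the scalings at play: every hidden width is proportional to d, so the parameter count is of order d^2, and taking the sample size proportional to it places the problem in the interpolation regime studied in the paper.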