🤖 AI Summary
This work theoretically characterizes multi-scale feature learning in neural networks, unifying the understanding of kernel scaling and data-driven kernel adaptation—and their respective expressive power and inductive biases—across training regimes from the mean-field to the standard (NTK) scaling limit.
Method: We propose the first unified analytical framework spanning all scaling regimes; prove, for the first time, that under mean prediction, kernel adaptation is equivalent to effective kernel scaling while preserving directional feature learning; and derive a statistically exact closed-form expression for network outputs across the full scaling spectrum via statistical mechanics, continuum approximation, and higher-order saddle-point analysis.
Contribution/Results: Our core contribution is a rigorous theory of multi-scale adaptive feature learning that bridges long-standing conceptual divides between scaling paradigms, revealing the precise mechanisms of feature emergence and fundamental limits of expressivity—thereby providing a unified foundation for understanding deep learning principles.
📝 Abstract
Theoretically describing feature learning in neural networks is crucial for understanding their expressive power and inductive biases, motivating various approaches. Some approaches describe network behavior after training through a simple change in kernel scale from initialization, resulting in a generalization power comparable to a Gaussian process. Conversely, in other approaches training results in the adaptation of the kernel to the data, involving complex directional changes to the kernel. While these approaches capture different facets of network behavior, their relationship and respective strengths across scaling regimes remains an open question. This work presents a theoretical framework of multi-scale adaptive feature learning bridging these approaches. Using methods from statistical mechanics, we derive analytical expressions for network output statistics which are valid across scaling regimes and in the continuum between them. A systematic expansion of the network's probability distribution reveals that mean-field scaling requires only a saddle-point approximation, while standard scaling necessitates additional correction terms. Remarkably, we find across regimes that kernel adaptation can be reduced to an effective kernel rescaling when predicting the mean network output of a linear network. However, even in this case, the multi-scale adaptive approach captures directional feature learning effects, providing richer insights than what could be recovered from a rescaling of the kernel alone.