On the Neural Feature Ansatz for Deep Neural Networks

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the feature learning mechanism in deep neural networks, aiming to validate and generalize the Neural Feature Ansatz (NFA): the conjecture that, after training, the Gram matrix of a network's first-layer weights is proportional to a power of its average gradient outer product (AGOP) with respect to the inputs. Method: the authors extend the NFA from two-layer to L-layer linear networks under gradient flow dynamics, and complement the theory with numerical experiments across optimizers, initialization schemes, and weight decay rates. Contribution/Results: they prove that the NFA holds throughout training under balanced initialization with exponent α = 1/L, establishing a depth-dependent scaling law; under unbalanced initialization, applying weight decay restores the NFA asymptotically. They also construct explicit counterexamples in which the NFA fails for networks with nonlinear activations, even when those networks fit the training data arbitrarily well, thereby delimiting the ansatz's regime of validity.

📝 Abstract
Understanding feature learning is an important open question in establishing a mathematical foundation for deep neural networks. The Neural Feature Ansatz (NFA) states that after training, the Gram matrix of the first-layer weights of a deep neural network is proportional to some power $\alpha > 0$ of the average gradient outer product (AGOP) of this network with respect to its inputs. Assuming gradient flow dynamics with balanced weight initialization, the NFA was proven to hold throughout training for two-layer linear networks with exponent $\alpha = 1/2$ (Radhakrishnan et al., 2024). We extend this result to networks with $L \geq 2$ layers, showing that the NFA holds with exponent $\alpha = 1/L$, thus demonstrating a depth dependency of the NFA. Furthermore, we prove that for unbalanced initialization, the NFA holds asymptotically through training if weight decay is applied. We also provide counterexamples showing that the NFA does not hold for some network architectures with nonlinear activations, even when these networks fit the training data arbitrarily well. We thoroughly validate our theoretical results through numerical experiments across a variety of optimization algorithms, weight decay rates and initialization schemes.
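The NFA relation stated above can be checked numerically on a toy deep linear network. The sketch below is not the paper's experimental setup, just a minimal illustration under its assumptions: an $L$-layer linear network $f(x) = W_L \cdots W_1 x$ with square weight matrices, balanced initialization ($W_i = \varepsilon I$, which satisfies $W_{i+1}^\top W_{i+1} = W_i W_i^\top$), and plain gradient descent on a least-squares loss. For a linear network the input Jacobian is the end-to-end matrix $P = W_L \cdots W_1$ for every input, so the AGOP is simply $P^\top P$; the NFA with $\alpha = 1/L$ then predicts $W_1^\top W_1 \propto (P^\top P)^{1/L}$. All dimensions, learning rates, and step counts here are hypothetical choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, n, lr, steps = 5, 3, 200, 0.02, 5000

# Hypothetical toy regression task: learn a random linear map A.
A = rng.standard_normal((d, d))
X = rng.standard_normal((d, n))
Y = A @ X

def chain(mats):
    """Product of matrices in the given order (identity if empty)."""
    out = np.eye(d)
    for M in mats:
        out = out @ M
    return out

# Balanced initialization: W_i = eps * I gives W_{i+1}^T W_{i+1} = W_i W_i^T.
eps = 0.1
Ws = [eps * np.eye(d) for _ in range(L)]  # Ws[i] is W_{i+1}

for _ in range(steps):
    P = chain(Ws[::-1])                    # end-to-end map W_L ... W_1
    G = (2.0 / n) * (P @ X - Y) @ X.T      # dLoss/dP for mean squared error
    # Chain rule: dLoss/dW_i = (W_L...W_{i+1})^T G (W_{i-1}...W_1)^T
    Ws = [W - lr * chain(Ws[i + 1:][::-1]).T @ G @ chain(Ws[:i][::-1]).T
          for i, W in enumerate(Ws)]

# NFA check: W_1^T W_1 should align with (AGOP)^{1/L}. For a linear net
# the AGOP equals P^T P, independent of the input distribution.
P = chain(Ws[::-1])
agop = P.T @ P
evals, evecs = np.linalg.eigh(agop)
agop_root = evecs @ np.diag(np.clip(evals, 0.0, None) ** (1.0 / L)) @ evecs.T

gram = Ws[0].T @ Ws[0]
cos = np.sum(gram * agop_root) / (np.linalg.norm(gram) * np.linalg.norm(agop_root))
print(f"cosine similarity between W1^T W1 and AGOP^(1/L): {cos:.4f}")
```

Under balanced initialization the cosine similarity between the flattened matrices should be very close to 1, matching the $\alpha = 1/L$ prediction; perturbing the initialization away from balance (e.g. scaling one layer differently) degrades the alignment unless weight decay is added, which is the paper's second regime.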
Problem

Research questions and friction points this paper is trying to address.

Extending the Neural Feature Ansatz to deep networks with depth dependency
Proving NFA holds asymptotically for unbalanced initialization with weight decay
Providing counterexamples where NFA fails in nonlinear network architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Neural Feature Ansatz to multi-layer networks
Proves depth-dependent exponent in feature learning
Validates theory with numerical experiments across parameters
Edward Tansley
Mathematical Institute, Woodstock Road, University of Oxford, Oxford, UK, OX2 6GG
Estelle Massart
ICTEAM Institute, UCLouvain, Euler Building, Avenue Georges Lemaître, 4 - bte L4.05.01, Louvain-la-Neuve, B - 1348, Belgium
Coralia Cartis
University of Oxford
Optimization, Numerical Analysis, Complexity, Compressed Sensing