🤖 AI Summary
This study investigates how neural networks learn and represent features at convergence, proving and empirically validating the Features at Convergence Theorem (FACT), which gives a self-consistency equation that weight matrices satisfy at convergence when trained with nonzero weight decay. For each weight matrix $W$, the equation relates the feature matrix $W^\top W$ to the inputs passed into the matrix during the forward pass and the loss gradients passed through it during backpropagation. Experiments confirm that trained deep networks satisfy the FACT at convergence. Building on FACT, the authors modify the Recursive Feature Machine (RFM) so that it obeys this self-consistency equation, yielding the new FACT-RFM algorithm. FACT-RFM achieves high performance on tabular benchmarks and reproduces characteristic feature learning behaviors of neural network training, including grokking in modular arithmetic and phase transitions in learning sparse parities.
📝 Abstract
A central challenge in deep learning theory is to understand how neural networks learn and represent features. To this end, we prove the Features at Convergence Theorem (FACT), which gives a self-consistency equation that neural network weights satisfy at convergence when trained with nonzero weight decay. For each weight matrix $W$, this equation relates the "feature matrix" $W^\top W$ to the set of input vectors passed into the matrix during forward propagation and the loss gradients passed through it during backpropagation. We validate this relation empirically, showing that neural features indeed satisfy the FACT at convergence. Furthermore, by modifying the "Recursive Feature Machines" of Radhakrishnan et al. 2024 so that they obey the FACT, we arrive at a new learning algorithm, FACT-RFM. FACT-RFM achieves high performance on tabular data and captures various feature learning behaviors that occur in neural network training, including grokking in modular arithmetic and phase transitions in learning sparse parities.
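The flavor of the self-consistency relation can be illustrated with elementary calculus in the simplest case, a single linear layer. Any stationary point of a loss with weight decay $\lambda$ satisfies $\lambda W = -\partial L_{\text{data}}/\partial W = -\sum_i g_i x_i^\top$, where $x_i$ are the layer's inputs and $g_i$ the backpropagated output gradients; multiplying by $W^\top$ gives $W^\top W = -\tfrac{1}{\lambda}\sum_i (W^\top g_i)\, x_i^\top$, relating the feature matrix to forward-pass inputs and backward-pass gradients. The sketch below verifies this identity numerically for ridge-regularized least squares trained by gradient descent; it is an illustration of the stationarity identity under these assumptions, not the paper's exact statement, whose normalization conventions may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 3          # samples, input dim, output dim (illustrative sizes)
X = rng.normal(size=(n, d))  # inputs x_i as rows
Y = rng.normal(size=(n, k))  # targets
lam = 0.1                    # weight decay coefficient

# Minimize L = (1/2n) ||X W^T - Y||_F^2 + (lam/2) ||W||_F^2 by gradient descent.
W = 0.1 * rng.normal(size=(k, d))
lr = 0.1
for _ in range(5000):
    G = (X @ W.T - Y) / n        # rows are per-sample output gradients g_i
    W -= lr * (G.T @ X + lam * W)

# At convergence: lam * W = -sum_i g_i x_i^T, hence
# W^T W = -(1/lam) * sum_i (W^T g_i) x_i^T.
G = (X @ W.T - Y) / n            # recompute gradients at the final weights
lhs = W.T @ W
rhs = -(1.0 / lam) * (W.T @ G.T) @ X
print(np.allclose(lhs, rhs, atol=1e-6))  # True once training has converged
```

For deep networks the same stationarity argument applies layer by layer, with $x_i$ the activations entering the layer and $g_i$ the gradients flowing back into it, which is the regime the paper studies.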