🤖 AI Summary
In continual learning, neural networks often suffer from "primacy bias," where early-task representations become prematurely entrenched, impairing generalization to subsequent tasks. To address this, we propose a method that regulates feature-learning dynamics by increasing the effective learning rate, i.e., the ratio of the parameter-update norm to the parameter norm, thereby promoting continuous representation updating and broader task coverage. This mechanism transfers the delayed-generalization phenomenon observed in grokking into non-stationary continual learning settings, systematically mitigating prior-induced bias. We validate our approach across grokking, warm-start, and reinforcement learning benchmarks, demonstrating improvements in feature diversity and cross-task generalization. Our method provides both a new conceptual lens and a practical pathway for enhancing representational plasticity in continual learning.
📝 Abstract
In continual learning problems, it is often necessary to overwrite components of a neural network's learned representation in response to changes in the data stream; however, neural networks often exhibit primacy bias, whereby early training data hinders the network's ability to generalize on later tasks. While the feature-learning dynamics of nonstationary learning problems are not well studied, the emergence of feature-learning dynamics is known to drive the phenomenon of grokking, wherein neural networks initially memorize their training data and only later exhibit perfect generalization. This work conjectures that the same feature-learning dynamics which facilitate generalization in grokking also underlie the ability to overwrite previously learned features, and that methods which accelerate grokking by facilitating feature-learning dynamics are promising candidates for addressing primacy bias in non-stationary learning problems. We then propose a straightforward method to induce feature-learning dynamics as needed throughout training by increasing the effective learning rate, i.e. the ratio between update and parameter norms. We show that this approach both facilitates feature-learning and improves generalization in a variety of settings, including grokking, warm-starting neural network training, and reinforcement learning tasks.
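The effective learning rate described above can be made concrete with a short sketch. This is a hypothetical NumPy illustration, not the paper's exact procedure: `effective_lr` computes the update-to-parameter norm ratio, and `rescale_params` shrinks the parameter norm so that a given update corresponds to a desired (larger) effective learning rate, the rough mechanism by which such methods re-induce feature learning.

```python
import numpy as np

def effective_lr(update, params):
    # Effective learning rate: ratio of the update norm to the parameter norm.
    return np.linalg.norm(update) / np.linalg.norm(params)

def rescale_params(params, update, target_elr):
    # Shrink the parameter norm so the given update yields the target
    # effective learning rate. Hypothetical sketch: the actual method may
    # scale parameters, updates, or both, and may act per-layer.
    current = effective_lr(update, params)
    if current < target_elr:
        params = params * (current / target_elr)
    return params

# Example: a unit-scale parameter vector with a small update.
params = np.ones(4)            # ||params|| = 2.0
update = np.full(4, 0.1)       # ||update|| = 0.2
print(effective_lr(update, params))                    # 0.1
rescaled = rescale_params(params, update, 0.5)
print(effective_lr(update, rescaled))                  # 0.5
```

Shrinking the parameter norm (rather than amplifying the update) keeps the optimizer's nominal step size untouched while still making each step perturb the representation proportionally more, which is why it is a natural knob for restoring plasticity.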