NeuralGrok: Accelerate Grokking by Neural Gradient Transformation

📅 2025-04-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the “grokking” phenomenon (delayed generalization following a prolonged period of overfitting) in Transformers trained on arithmetic tasks. The authors propose NeuralGrok, a learnable MLP-based gradient-transformation module, trained via bilevel optimization, that dynamically modulates parameter-wise gradient contributions according to their effect on generalization. They further introduce Absolute Gradient Entropy (AGE), a metric that quantifies intrinsic model complexity and links its reduction to improved generalization. Experiments demonstrate that the approach reduces grokking latency by over 50% on average, stabilizes training, and steadily reduces model complexity, outperforming conventional regularizers such as weight decay.

📝 Abstract
Grokking is proposed and widely studied as an intricate phenomenon in which generalization is achieved after a long-lasting period of overfitting. In this work, we propose NeuralGrok, a novel gradient-based approach that learns an optimal gradient transformation to accelerate the generalization of transformers in arithmetic tasks. Specifically, NeuralGrok trains an auxiliary module (e.g., an MLP block) in conjunction with the base model. This module dynamically modulates the influence of individual gradient components based on their contribution to generalization, guided by a bilevel optimization algorithm. Our extensive experiments demonstrate that NeuralGrok significantly accelerates generalization, particularly in challenging arithmetic tasks. We also show that NeuralGrok promotes a more stable training paradigm, constantly reducing the model's complexity, while traditional regularization methods, such as weight decay, can introduce substantial instability and impede generalization. We further investigate the intrinsic model complexity leveraging a novel Absolute Gradient Entropy (AGE) metric, which explains that NeuralGrok effectively facilitates generalization by reducing the model complexity. We offer valuable insights on the grokking phenomenon of Transformer models, which encourages a deeper understanding of the fundamental principles governing generalization ability.
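The bilevel mechanism in the abstract can be illustrated with a toy sketch. This is an assumption-laden stand-in, not the paper's algorithm: a linear model replaces the Transformer, a per-parameter scale vector replaces the MLP module, and the outer step tunes the transformation against a held-out loss by finite differences rather than the paper's bilevel optimizer. All names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions for illustration): a linear model replaces the
# Transformer, and a per-parameter scale vector replaces the MLP module.
X_tr, X_val = rng.normal(size=(32, 4)), rng.normal(size=(32, 4))
true_theta = np.array([1.0, -2.0, 0.5, 3.0])
y_tr, y_val = X_tr @ true_theta, X_val @ true_theta

def loss(theta, X, y):
    r = X @ theta - y
    return 0.5 * float(np.mean(r * r))

def grad(theta, X, y):
    return X.T @ (X @ theta - y) / len(y)

theta = np.zeros(4)
scale = np.ones(4)               # learnable gradient transformation
lr, meta_lr, eps = 0.1, 0.05, 1e-4

for step in range(200):
    g = grad(theta, X_tr, y_tr)
    # Outer (meta) step: estimate how each scale component changes the
    # validation loss after one inner update, via finite differences.
    meta_grad = np.zeros_like(scale)
    base = loss(theta - lr * scale * g, X_val, y_val)
    for i in range(len(scale)):
        s = scale.copy()
        s[i] += eps
        meta_grad[i] = (loss(theta - lr * s * g, X_val, y_val) - base) / eps
    scale = np.clip(scale - meta_lr * meta_grad, 0.0, 10.0)  # keep stable
    # Inner step: update the base model with the transformed gradient.
    theta -= lr * scale * g

final_val = loss(theta, X_val, y_val)
```

The key structural point carried over from the abstract is the two nested updates: the base model only ever sees gradients reshaped by the auxiliary transformation, while the transformation itself is optimized against generalization (held-out) performance.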
Problem

Research questions and friction points this paper is trying to address.

Accelerate generalization in transformers for arithmetic tasks
Reduce model complexity during training for better stability
Understand grokking phenomenon in Transformer models
Innovation

Methods, ideas, or system contributions that make the work stand out.

NeuralGrok accelerates generalization via gradient transformation
Uses bilevel optimization for dynamic gradient modulation
Introduces Absolute Gradient Entropy (AGE) to quantify and track model complexity
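The AGE metric is only named here, not defined. One plausible reading, assumed purely for illustration (the paper's exact formula may differ), treats the normalized absolute gradient components as a probability distribution and computes its Shannon entropy:

```python
import numpy as np

def absolute_gradient_entropy(grads):
    """Hypothetical AGE-style metric (illustrative assumption): normalize the
    absolute gradient components into a distribution and return its Shannon
    entropy in nats. Lower entropy = gradient mass concentrated on fewer
    parameters, a possible proxy for lower effective complexity."""
    g = np.abs(np.concatenate([np.ravel(x) for x in grads]))
    p = g / g.sum()                       # normalize to a distribution
    p = p[p > 0]                          # drop zeros so log is defined
    return float(-(p * np.log(p)).sum())

# Uniform gradients give the maximal value log(n); here n = 8.
age = absolute_gradient_entropy([np.ones(4), np.ones(4)])
```

Under this reading, tracking the metric over training would show whether gradient updates concentrate on a shrinking subset of parameters as the model generalizes.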