🤖 AI Summary
This study investigates whether the "generalization circuits" formed in Transformers after grokking on compositional reasoning tasks genuinely enhance downstream performance, and whether their computational cost is justified. Through mechanistic interpretability analyses, controlled training experiments, and comparisons of inference pathways, the authors find that grokking does not introduce a novel reasoning paradigm but instead embeds memorized atomic facts into pre-existing reasoning pathways. High accuracy and the emergence of specific circuits are shown to be decoupled, and grokked models employ the same inference mechanisms as non-grokked models on in-distribution data. Moreover, grokked models exhibit limited generalization on out-of-distribution tasks and struggle to transfer to new knowledge. These findings suggest that grokking does not necessarily confer meaningful generalization benefits.
📝 Abstract
While Large Language Models (LLMs) excel at factual retrieval, they often struggle with the "curse of two-hop reasoning" in compositional tasks. Recent research suggests that parameter-sharing transformers can bridge this gap by forming a "Generalization Circuit" during a prolonged "grokking" phase. A fundamental question arises: Is a grokked model superior to its non-grokked counterparts on downstream tasks? Furthermore, is the extensive computational cost of waiting for the grokking phase worthwhile? In this work, we conduct a mechanistic study to evaluate the Generalization Circuit's role in knowledge assimilation and transfer. We demonstrate that: (i) The inference paths established by non-grokked and grokked models for in-distribution compositional queries are identical. This suggests that the "Generalization Circuit" does not represent the sudden acquisition of a new reasoning paradigm. Instead, we argue that grokking is the process of integrating memorized atomic facts into a naturally established reasoning path. (ii) Achieving high accuracy on unseen cases after prolonged training and the formation of a particular reasoning path are not bound together; they can occur independently under specific data regimes. (iii) Even a mature circuit exhibits limited transferability when integrating new knowledge, suggesting that "grokked" Transformers do not achieve full mastery of compositional logic.