Towards Empirical Interpretation of Internal Circuits and Properties in Grokked Transformers on Modular Polynomials

📅 2024-02-26
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work investigates the mechanistic underpinnings of “grokking”—the delayed emergence of generalization—in Transformers performing modular polynomial arithmetic. Method: The authors design modular arithmetic and polynomial tasks, combine Fourier spectral analysis with multi-task mixed training and ablation studies, and introduce two novel progress measures: *Fourier Frequency Density* and *Fourier Coefficient Ratio*. Results: (1) Distinct modular operations induce operation-specific Fourier representations in the latent space; (2) Representation transfer is limited, occurring only under specific structural conditions—e.g., from elementary arithmetic to linear expressions; (3) Multi-task co-training can trigger “co-grokking”, in which all tasks grok simultaneously, substantially accelerating generalization. Crucially, the paper provides empirical evidence that grokking on polynomial tasks manifests as a superposition of Fourier modes, revealing a structured, interpretable frequency-domain mechanism. This advances the understanding of Transformers’ symbolic reasoning capabilities through a principled spectral lens.

📝 Abstract
Grokking has been actively explored to unravel the mystery of delayed generalization, and identifying interpretable representations and algorithms inside grokked models offers a suggestive hint towards understanding its mechanism. Grokking on modular addition is known to implement a Fourier representation, with calculation circuits based on trigonometric identities, in Transformers. Given the periodicity inherent in modular arithmetic, a natural question is to what extent these explanations and interpretations hold for grokking on modular operations beyond addition. For a closer look, we first hypothesize that (1) any modular operation can be characterized by a distinctive Fourier representation or internal circuit, (2) grokked models acquire common features transferable among similar operations, and (3) mixing datasets of similar operations promotes grokking. We then examine these hypotheses extensively by training Transformers on complex modular arithmetic tasks, including polynomials. Our Fourier analysis and novel progress measures for modular arithmetic, Fourier Frequency Density and Fourier Coefficient Ratio, characterize the distinctive internal representations of grokked models per modular operation; for instance, polynomials often yield a superposition of the Fourier components seen in elementary arithmetic, whereas no clear patterns emerge for challenging non-factorizable polynomials. In contrast, our ablation study on pre-grokked models reveals that transferability among models grokked on each operation is limited to specific combinations, such as from elementary arithmetic to linear expressions. Moreover, some multi-task mixtures lead to co-grokking -- where grokking happens simultaneously for all tasks -- and accelerate generalization, while others fail to find optimal solutions. We provide empirical steps towards the interpretability of internal circuits.
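To make the spectral analysis concrete, here is a minimal sketch of the kind of Fourier diagnostic the abstract describes: taking the discrete Fourier transform of a token-embedding matrix along the token axis and measuring how concentrated the spectral power is. The synthetic embedding, the 90% threshold, and the "density" definition below are illustrative assumptions, not the paper's exact metric definitions.

```python
import numpy as np

# Illustrative setup: an embedding matrix E of shape (p, d) for a model
# trained on arithmetic mod p. We fabricate E with a few dominant Fourier
# modes plus noise, mimicking the sparse frequency structure reported
# for grokked models (this stand-in is an assumption for the demo).
p, d = 97, 128
rng = np.random.default_rng(0)
tokens = np.arange(p)[:, None]
key_freqs = np.array([3, 17, 40])
E = np.cos(2 * np.pi * key_freqs * tokens / p) @ rng.normal(size=(3, d))
E += 0.05 * rng.normal(size=(p, d))

# Fourier transform along the token axis; total power per frequency.
F = np.fft.rfft(E, axis=0)
power = (np.abs(F) ** 2).sum(axis=1)
power[0] = 0.0           # discard the constant (DC) component
power /= power.sum()     # normalize to a distribution over frequencies

# A "frequency density"-style measure (hypothetical definition): the
# fraction of frequencies needed to capture 90% of the spectral power.
sorted_power = np.sort(power)[::-1]
k = int(np.searchsorted(np.cumsum(sorted_power), 0.90)) + 1
density = k / len(power)
print(k, density)  # a grokked model concentrates power in few frequencies
```

Run on real learned embeddings, a low density would indicate the sparse, operation-specific Fourier representation the paper associates with grokked solutions, while a diffuse spectrum would suggest memorization or an uninterpretable circuit.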
Problem

Research questions and friction points this paper is trying to address.

Grokked Transformers
Polynomial Kernels
Mathematical Operations Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grokked Transformers
Mathematical Operations
Learning Dynamics