🤖 AI Summary
This work investigates the minimal subnetwork within Transformer language models that supports bigram prediction: predicting the next token based solely on the current token.
Method: Using subnetwork identification, residual-stream activation analysis, and mechanistic interpretability techniques, we systematically isolate the critical components across model layers.
Contribution/Results: In fully trained models up to 1B parameters, we identify a class of "bigram subnetworks" concentrated almost exclusively in the first-layer MLP. Although they constitute less than 0.2% of total parameters, these subnetworks are both necessary and sufficient for bigram prediction, and they overlap significantly with subnetworks trained to optimally prune the model. We propose building circuits up from a minimal subnetwork, rather than ablating circuits from a full model, as a principled approach to isolating functional circuits. We further verify that the bigram subnetwork drives the representational transformation from current-token to next-token activations: the first-layer MLP induces a sharp change that aligns activations with next-token predictions rather than current-token representations. These findings provide empirical grounding for understanding language model mechanisms and for designing parameter-efficient architectures.
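The subnetwork-identification idea above can be pictured as applying a binary mask over a model's parameter tensors, keeping only a small named subset (here, a toy first-layer MLP) and zeroing everything else. This is a minimal stdlib-only sketch; the component names, shapes, and the `apply_mask` helper are illustrative assumptions, not the paper's actual code.

```python
import random

random.seed(0)
# Toy "model": parameter tensors stored as flat lists, keyed by component name.
# (Real models would use tensors; lists keep the sketch dependency-free.)
params = {
    "layer0.mlp": [random.gauss(0, 1) for _ in range(64)],
    "layer0.attn": [random.gauss(0, 1) for _ in range(64)],
    "layer1.mlp": [random.gauss(0, 1) for _ in range(64)],
}

def apply_mask(params, keep_prefixes):
    """Zero every tensor whose name doesn't start with a kept prefix,
    leaving only the chosen subnetwork active."""
    return {
        name: (w if any(name.startswith(p) for p in keep_prefixes)
               else [0.0] * len(w))
        for name, w in params.items()
    }

sub = apply_mask(params, keep_prefixes=["layer0.mlp"])
kept = sum(1 for w in sub.values() for x in w if x != 0.0)
total = sum(len(w) for w in sub.values())
print(f"kept {kept}/{total} parameters")
```

In the paper's setting, the kept fraction is tiny (under 0.2% of parameters), yet the masked model still performs the bigram transformation.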
📝 Abstract
In Transformer language models, activation vectors transform from current token embeddings to next token predictions as they pass through the model. To isolate a minimal form of this transformation, we identify language model subnetworks that make bigram predictions, naive next token predictions based only on the current token. We find that bigram subnetworks can be found in fully trained language models up to 1B parameters, and these subnetworks are critical for model performance even when they consist of less than 0.2% of model parameters. Bigram subnetworks are concentrated in the first Transformer MLP layer, and they overlap significantly with subnetworks trained to optimally prune a given model. Mechanistically, the bigram subnetworks often recreate a pattern from the full models where the first layer induces a sharp change that aligns activations with next token predictions rather than current token representations. Our results demonstrate that bigram subnetworks comprise a minimal subset of parameters that are both necessary and sufficient for basic next token predictions in language models, and they help drive the transformation from current to next token activations in the residual stream. These subnetworks can lay a foundation for studying language model circuits by building up from a minimal circuit rather than the traditional approach of ablating circuits from a full model.
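As a concrete illustration of the prediction target the abstract describes, a bigram model predicts the next token from counts conditioned only on the current token. The sketch below shows that baseline in plain Python; it illustrates what the subnetworks compute, not how they are identified, and the function names are hypothetical.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count next-token frequencies conditioned only on the current token."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Naive next-token prediction: the most frequent follower of `token`."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

tokens = "a b a b a c".split()
model = train_bigram(tokens)
print(predict_next(model, "a"))  # b  ("a" is followed by "b" twice, "c" once)
```

The paper's finding is that a subnetwork of under 0.2% of a Transformer's parameters, concentrated in the first-layer MLP, suffices to implement this current-token-only mapping inside the model.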