Weight Tying Biases Token Embeddings Towards the Output Space

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how weight tying in language models biases the shared embedding matrix toward output prediction, thereby undermining its effectiveness as an input representation. Through tuned-lens analysis of gradient dynamics, the authors find that output-side gradients dominate embedding updates early in training, impairing the contribution of shallow layers to the residual stream. To establish causality, they propose an input gradient scaling method that mitigates this bias without altering the model architecture. Experiments show that modulating input gradients restores the dual role of the embeddings, as both input encoders and output predictors, offering a route to more efficient training of smaller language models, where the embedding matrix accounts for a substantial share of parameters.
📝 Abstract
Weight tying, i.e. sharing parameters between input and output embedding matrices, is common practice in language model design, yet its impact on the learned embedding space remains poorly understood. In this paper, we show that tied embedding matrices align more closely with output (unembedding) matrices than with input embeddings of comparable untied models, indicating that the shared matrix is shaped primarily for output prediction rather than input representation. This unembedding bias arises because output gradients dominate early in training. Using tuned lens analysis, we show this negatively affects early-layer computations, which contribute less effectively to the residual stream. Scaling input gradients during training reduces this bias, providing causal evidence for the role of gradient imbalance. This is mechanistic evidence that weight tying optimizes the embedding matrix for output prediction, compromising its role in input representation. These results help explain why weight tying can harm performance at scale and have implications for training smaller LLMs, where the embedding matrix contributes substantially to total parameter count.
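The gradient imbalance the abstract describes can be made concrete with a toy model. Below is a minimal sketch (not the paper's implementation) of a 1-D "tied" weight that is used both to embed the input and to produce the output logit: its total gradient is the sum of an input-path and an output-path term, and input gradient scaling simply multiplies the input-path term by a factor `alpha`. The function name and the scalar setup are hypothetical, chosen only to illustrate the decomposition.

```python
def tied_gradients(w, x, target, alpha=1.0):
    """Toy tied-weight model: hidden = w * x (embedding/input path),
    logit = w * hidden (unembedding/output path), squared-error loss.

    Because the same w appears on both paths (weight tying), dL/dw is the
    SUM of the two paths' gradients. `alpha` scales only the input-path
    term, mimicking the paper's input gradient scaling intervention.
    Returns (loss, grad).
    """
    hidden = w * x
    logit = w * hidden                      # same w reused: weight tying
    loss = (logit - target) ** 2
    dl_dlogit = 2.0 * (logit - target)
    grad_output_path = dl_dlogit * hidden   # treats w as the unembedding
    grad_input_path = dl_dlogit * w * x     # flows back through hidden
    return loss, grad_output_path + alpha * grad_input_path
```

With `alpha=1.0` this is the ordinary tied gradient; setting `alpha > 1.0` upweights the input-representation signal, which is the causal intervention the abstract credits with reducing the unembedding bias.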
Problem

Research questions and friction points this paper is trying to address.

Keywords: weight tying, embedding bias, output prediction, input representation, gradient imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Keywords: weight tying, embedding bias, gradient imbalance, unembedding, tuned lens