Attention Retrieves, MLP Memorizes: Disentangling Trainable Components in the Transformer

📅 2025-06-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the specific functional contributions of self-attention in Transformers to algorithmic tasks (mathematical reasoning, memorization, and retrieval) and clarifies its division of labor with MLPs. To isolate the learnable mechanisms, the authors systematically freeze key components (the Q/K projectors or the MLPs) and introduce MixiT, a variant whose attention coefficients are random and fixed at initialization. Key findings: (1) MixiT matches fully trained models on basic arithmetic and memorization-heavy tasks, showing that random attention suffices for these capabilities; (2) freezing the Q/K projectors still allows induction heads to form and yields competitive language-modeling performance, indicating that learning the input projections is not essential; (3) input-dependent attention is indispensable for retrieval tasks. Collectively, the results suggest that self-attention's core role lies in structured information routing rather than in learned attention weights, highlighting its architectural, rather than purely parametric, utility.

📝 Abstract
The Transformer architecture is central to the success of modern Large Language Models (LLMs), in part due to its surprising ability to perform a wide range of algorithmic tasks -- including mathematical reasoning, memorization, and retrieval -- using only gradient-based training on next-token prediction. While the core component of a Transformer is the self-attention mechanism, we question how much, and which aspects, of the performance gains can be attributed to it. To this end, we compare standard Transformers to variants in which either the multi-layer perceptron (MLP) layers or the attention projectors (queries and keys) are frozen at initialization. To further isolate the contribution of attention, we introduce MixiT -- the Mixing Transformer -- a simplified, principled model in which the attention coefficients are entirely random and fixed at initialization, eliminating any input-dependent computation or learning in attention. Surprisingly, we find that MixiT matches the performance of fully trained Transformers on various algorithmic tasks, especially those involving basic arithmetic or focusing heavily on memorization. For retrieval-based tasks, we observe that having input-dependent attention coefficients is consistently beneficial, while MixiT underperforms. We attribute this failure to its inability to form specialized circuits such as induction heads -- a specific circuit known to be crucial for learning and exploiting repeating patterns in input sequences. Even more interestingly, we find that attention with frozen key and query projectors is not only able to form induction heads, but can also perform competitively on language modeling. Our results underscore the importance of architectural heterogeneity, where distinct components contribute complementary inductive biases crucial for solving different classes of tasks.
Problem

Research questions and friction points this paper is trying to address.

Disentangle roles of attention and MLP in Transformers
Assess impact of frozen attention or MLP components
Evaluate random attention in simplified Transformer models
Innovation

Methods, ideas, or system contributions that make the work stand out.

MixiT uses random fixed attention coefficients
Frozen MLP layers maintain performance
Frozen attention projectors form induction heads
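The MixiT idea described above can be made concrete: where standard attention computes input-dependent coefficients from queries and keys, MixiT samples a mixing matrix once at initialization and reuses it for every input. A minimal NumPy sketch under these assumptions (names, dimensions, and the single-head, unmasked setup are illustrative, not taken from the paper's code):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16

# Projection matrices, randomly initialized (causal masking omitted for brevity).
W_q = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
W_k = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
W_v = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

def standard_attention(x):
    # Coefficients depend on the input through Q and K.
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d_model)
    return softmax(scores) @ v

# MixiT-style mixing: coefficients are sampled once at initialization and
# never depend on the input; only the value path would be trained.
A_fixed = softmax(rng.standard_normal((seq_len, seq_len)))

def mixit_attention(x):
    return A_fixed @ (x @ W_v)

x = rng.standard_normal((seq_len, d_model))
out = mixit_attention(x)
```

The contrast makes the paper's ablation visible: `mixit_attention` performs the same token-mixing step as `standard_attention` but with no input-dependent computation in the attention coefficients, so any remaining capability must come from the architecture's routing structure and the trained MLP/value parameters.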