Mechanistic evaluation of Transformers and state space models

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
State space models (SSMs) exhibit inconsistent performance on associative recall (AR) tasks, yet the architectural mechanisms underlying this instability remain poorly understood. Method: The authors conduct a mechanistic, architecture-level comparison of Transformers and prominent SSMs (Mamba, Based, H3, Hyena), using causal interventions, induction-head probing, and a novel hierarchical synthetic task, Associative Treecall (ATR), built on probabilistic context-free grammar (PCFG) induction. Contribution/Results: Only Transformers, Based, and Mamba succeed on AR and ATR, and they rely on distinct mechanisms: Transformers and Based learn context-sensitive induction heads, whereas Mamba depends on its short-convolution component. All three show consistent mechanistic patterns on ATR. The work links architectural primitives directly to in-context recall capability, showing how structural differences govern in-context learning behaviour, and motivates mechanistic evaluation as a complement to purely behavioural benchmarks.

📝 Abstract
State space models (SSMs) for language modelling promise an efficient and performant alternative to quadratic-attention Transformers, yet show variable performance on recalling basic information from the context. While performance on synthetic tasks like Associative Recall (AR) can point to this deficiency, behavioural metrics provide little information as to why, on a mechanistic level, certain architectures fail and others succeed. To address this, we conduct experiments on AR and find that only Transformers and Based SSM models fully succeed at AR, with Mamba a close third, whereas the other SSMs (H3, Hyena) fail. We then use causal interventions to explain why. We find that Transformers and Based learn to store key-value associations in-context using induction heads. By contrast, the SSMs compute these associations only at the last state, with only Mamba succeeding because of its short convolution component. To extend and deepen these findings, we introduce Associative Treecall (ATR), a synthetic task similar to AR based on PCFG induction. ATR introduces language-like hierarchical structure into the AR setting. We find that all architectures learn the same mechanism as they did for AR, and the same three models succeed at the task. These results reveal that architectures with similar accuracy may still have substantive differences, motivating the adoption of mechanistic evaluations.
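The abstract describes Associative Recall as a synthetic task in which a model sees a context of key-value pairs and must emit the value paired with a query key. The paper's exact data-generation recipe is not given here, so the following is only a minimal sketch of one common way to construct such sequences (the vocabulary split, pair count, and token layout are all assumptions):

```python
import random

def make_ar_example(num_pairs=8, vocab_size=64, seed=0):
    """Build one associative-recall (AR) sequence: interleaved key-value
    pairs followed by a query key; the target is the paired value.
    This is an illustrative construction, not the paper's exact setup."""
    rng = random.Random(seed)
    # Draw distinct keys and values from disjoint halves of a toy vocabulary
    # so every key appears exactly once in the context.
    keys = rng.sample(range(vocab_size // 2), num_pairs)
    values = rng.sample(range(vocab_size // 2, vocab_size), num_pairs)
    context = [tok for kv in zip(keys, values) for tok in kv]
    query = rng.choice(keys)
    target = values[keys.index(query)]
    return context + [query], target

seq, target = make_ar_example()
```

A model that has learned an induction-head-style mechanism can solve this by locating the earlier occurrence of the query token and copying the token that followed it.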
Problem

Research questions and friction points this paper is trying to address.

Explaining why SSMs vary in their ability to recall information from context
Comparing Transformers and SSMs mechanistically on synthetic tasks like AR
Introducing ATR to test hierarchical structure learning in models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformers and Based SSM models fully succeed at AR
Mamba succeeds thanks to its short-convolution component
Introduces Associative Treecall (ATR) to evaluate hierarchical structure learning
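ATR is described as extending AR with language-like hierarchical structure via PCFG induction. The abstract does not specify the grammar, so as a generic illustration of the underlying primitive, here is a sketch of sampling strings from a toy weighted PCFG (the grammar, symbols, and weights are invented for illustration and are not the paper's):

```python
import random

# Toy PCFG: each nonterminal maps to weighted right-hand sides.
# Purely illustrative; ATR's actual grammar is not specified in the abstract.
GRAMMAR = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["the", "N"], 1.0)],
    "VP": [(["V", "NP"], 0.5), (["V"], 0.5)],
    "N":  [(["cat"], 0.5), (["dog"], 0.5)],
    "V":  [(["sees"], 0.5), (["chases"], 0.5)],
}

def sample(symbol="S", rng=None):
    """Recursively expand `symbol`, returning a list of terminal tokens."""
    rng = rng or random.Random(0)
    if symbol not in GRAMMAR:  # terminal token: emit as-is
        return [symbol]
    rhss = [rhs for rhs, _ in GRAMMAR[symbol]]
    weights = [w for _, w in GRAMMAR[symbol]]
    chosen = rng.choices(rhss, weights=weights, k=1)[0]
    out = []
    for sym in chosen:
        out.extend(sample(sym, rng))
    return out

sentence = sample("S", random.Random(0))
```

Strings generated this way carry hierarchical (tree-shaped) dependencies, which is the property ATR uses to probe whether recall mechanisms survive language-like structure.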