🤖 AI Summary
This study addresses the limited interpretability and controllability of state space models (SSMs) in language modeling. Using mechanistic interpretability analysis, the authors uncover, for the first time, an activation subspace bottleneck in Mamba-family SSMs that hurts performance. To mitigate this issue without fine-tuning, they propose a test-time intervention that scales the bottleneck activations by a scalar. They further introduce Stable-Mamba, an architecture that restructures the bottleneck at its source. Experiments across five SSM variants and six benchmark datasets show an average performance improvement of 8.27%, with Stable-Mamba delivering particularly strong gains on long-context tasks.
📝 Abstract
State-space models (SSMs) have emerged as an efficient strategy for building powerful language models, avoiding the quadratic complexity of computing attention in transformers. Despite their promise, the interpretability and steerability of modern SSMs remain relatively underexplored. We take a major step in this direction by identifying activation subspace bottlenecks in the Mamba family of SSMs using tools from mechanistic interpretability. We then introduce a test-time steering intervention that simply multiplies the activations of the identified bottlenecks by a scalar. Across 5 SSMs and 6 diverse benchmarks, this intervention improves performance by an average of 8.27%, without requiring any task-specific tuning. Finally, we validate that the identified bottlenecks are indeed hindering performance by modifying them to yield an architecture we call Stable-Mamba, which achieves long-context performance gains when retrained from scratch.
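The scalar-multiplication intervention described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch only: the function name, the choice of bottleneck channels, and the scaling factor `alpha` are all assumptions, since the abstract does not specify how the bottleneck subspace is identified or what scalar is used.

```python
# Hypothetical sketch of a test-time scalar-scaling intervention.
# All names and values here are illustrative assumptions, not the
# paper's actual method.
import numpy as np

def scale_bottleneck(activations, channel_idx, alpha):
    """Multiply the identified bottleneck channels by a scalar alpha,
    leaving all other activation channels untouched."""
    out = activations.copy()
    out[..., channel_idx] *= alpha
    return out

# Toy activations with shape (batch, seq_len, hidden_dim).
acts = np.ones((1, 4, 8))

# Suppose channels 2 and 5 were identified as the bottleneck subspace.
scaled = scale_bottleneck(acts, channel_idx=[2, 5], alpha=1.5)
```

In practice such an intervention would likely be applied inside the model's forward pass (e.g. via a forward hook on the relevant SSM layer) rather than on a detached array, but the core operation is just this in-place channel rescaling, requiring no gradient updates or fine-tuning.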