🤖 AI Summary
This work systematically investigates the effectiveness and efficiency of State Space Models (SSMs) for long-sequence modeling, benchmarking them against Transformers. To address the lack of a unified theoretical framework, it proposes the first comprehensive taxonomy of SSM evolution, categorizing mainstream paradigms into classical SSMs, structured SSMs (e.g., S4), and selective SSMs (e.g., Mamba), and identifies three core mechanisms driving performance gains: HiPPO-theory-backed linear time-invariant dynamics, low-rank structured parameterization, and hardware-aware selective scanning. Integrating motivations, mathematical formulations, paradigm comparisons, and representative applications, the work constructs the first holistic, hierarchically organized SSM knowledge graph. This synthesis fills a critical gap in the systematic survey literature, providing both a rigorous benchmark and methodological guidance for future research and industrial deployment of SSMs.
📝 Abstract
State Space Models (SSMs) have emerged as a promising alternative to the popular transformer-based models and have been gaining increasing attention. Compared to transformers, SSMs excel at tasks with sequential data or long contexts, demonstrating comparable performance with significant efficiency gains. In this survey, we provide a coherent and systematic overview of SSMs, including their theoretical motivations, mathematical formulations, comparisons with existing model classes, and various applications. We divide the SSM family into three main sections, providing a detailed introduction to the original SSM, the structured SSM represented by S4, and the selective SSM typified by Mamba. We place an emphasis on technical depth, highlighting the key techniques introduced to improve the effectiveness and efficiency of SSMs. We hope this manuscript serves as an introduction for researchers exploring the theoretical foundations of SSMs.