🤖 AI Summary
Existing state space models (SSMs) face a trade-off between computational efficiency and representational capacity: structured transition matrices (e.g., HiPPO) enable efficient computation but cannot exactly simulate finite-state automata (FSAs) with *N* states, whereas unstructured dense matrices achieve optimal expressivity at prohibitive computational cost. This work proposes PD-SSM, a novel SSM architecture employing a structured sparse parameterization—specifically, the product of a column-wise one-hot matrix and a complex diagonal matrix. PD-SSM is the first SSM capable of *exact*, layer-optimal, and state-size-optimal simulation of *N*-state FSAs, thereby attaining strictly greater theoretical expressivity than prior SSMs. It integrates parallel scanning, BIBO-stable design, linear readout, and seamless embedding into a Transformer-SSM hybrid framework. Experiments demonstrate substantial gains over mainstream SSM variants on FSA state tracking, competitive performance with neural ODEs on multivariate time-series classification, and successful modeling of intricate state transitions in natural language.
📝 Abstract
Modern state-space models (SSMs) often utilize transition matrices that enable efficient computation but pose restrictions on the model's expressivity, as measured in terms of the ability to emulate finite-state automata (FSA). While unstructured transition matrices are optimal in terms of expressivity, they come at a prohibitively high compute and memory cost even for moderate state sizes. We propose a structured sparse parametrization of transition matrices in SSMs that enables FSA state tracking with optimal state size and depth, while keeping the computational cost of the recurrence comparable to that of diagonal SSMs. Our method, PD-SSM, parametrizes the transition matrix as the product of a column one-hot matrix ($P$) and a complex-valued diagonal matrix ($D$). Consequently, the computational cost of parallel scans scales linearly with the state size. Theoretically, the model is BIBO-stable and can emulate any $N$-state FSA with one layer of dimension $N$ and a linear readout of size $N \times N$, significantly improving on all current structured SSM guarantees. Experimentally, the model significantly outperforms a wide collection of modern SSM variants on various FSA state tracking tasks. On multiclass time-series classification, the performance is comparable to that of neural controlled differential equations, a paradigm explicitly built for time-series analysis. Finally, we integrate PD-SSM into a hybrid Transformer-SSM architecture and demonstrate that the model can effectively track the states of a complex FSA in which transitions are encoded as a set of variable-length English sentences. The code is available at https://github.com/IBM/expressive-sparse-state-space-model.
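To make the $PD$ structure concrete, here is a minimal NumPy sketch (not the authors' implementation) of why applying the transition matrix costs only $O(N)$ per step: a column one-hot $P$ can be stored as a vector of row indices, so $x \mapsto (PD)x$ reduces to an element-wise scaling by the diagonal followed by a scatter-add, rather than a dense $O(N^2)$ matrix-vector product. The index vector `idx` and the unit-magnitude diagonal are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)

# Column one-hot P: each column j has exactly one 1, in row idx[j].
idx = rng.integers(0, N, size=N)            # hypothetical example indices
P = np.zeros((N, N))
P[idx, np.arange(N)] = 1.0

# Complex diagonal D; unit-magnitude entries keep the recurrence bounded.
d = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=N))

x = rng.standard_normal(N).astype(complex)  # current state vector

# Dense application: x_next = (P D) x, costing O(N^2).
# Note P @ np.diag(d) == P * d, since d scales the columns of P.
dense = (P * d) @ x

# Sparse application: scatter-add (d * x)[j] into row idx[j], costing O(N).
sparse = np.zeros(N, dtype=complex)
np.add.at(sparse, idx, d * x)

assert np.allclose(dense, sparse)
```

The same index-based representation is what lets a parallel scan over a sequence of such matrices stay linear in the state size: composing two $PD$ factors only permutes indices and multiplies diagonal entries, never materializing a dense $N \times N$ product.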