Structured Sparse Transition Matrices to Enable State Tracking in State-Space Models

📅 2025-09-26
🤖 AI Summary
Existing state space models (SSMs) face a trade-off between computational efficiency and representational capacity: structured transition matrices (e.g., HiPPO) enable efficient computation but cannot exactly simulate finite-state automata (FSAs) with *N* states, whereas unstructured dense matrices achieve optimal expressivity at prohibitive computational cost. This work proposes PD-SSM, a novel SSM architecture employing a structured sparse parameterization—specifically, the product of a column-wise one-hot matrix and a complex diagonal matrix. PD-SSM is the first SSM capable of *exact*, layer-optimal, and state-size-optimal simulation of *N*-state FSAs, thereby attaining strictly greater theoretical expressivity than prior SSMs. It integrates parallel scanning, BIBO-stable design, linear readout, and seamless embedding into a Transformer-SSM hybrid framework. Experiments demonstrate substantial gains over mainstream SSM variants on FSA state tracking, competitive performance with neural controlled differential equations on multiclass time-series classification, and successful modeling of intricate state transitions in natural language.

📝 Abstract
Modern state-space models (SSMs) often utilize transition matrices which enable efficient computation but pose restrictions on the model's expressivity, as measured in terms of the ability to emulate finite-state automata (FSA). While unstructured transition matrices are optimal in terms of expressivity, they come at a prohibitively high compute and memory cost even for moderate state sizes. We propose a structured sparse parametrization of transition matrices in SSMs that enables FSA state tracking with optimal state size and depth, while keeping the computational cost of the recurrence comparable to that of diagonal SSMs. Our method, PD-SSM, parametrizes the transition matrix as the product of a column one-hot matrix ($P$) and a complex-valued diagonal matrix ($D$). Consequently, the computational cost of parallel scans scales linearly with the state size. Theoretically, the model is BIBO-stable and can emulate any $N$-state FSA with one layer of dimension $N$ and a linear readout of size $N \times N$, significantly improving on all current structured SSM guarantees. Experimentally, the model significantly outperforms a wide collection of modern SSM variants on various FSA state tracking tasks. On multiclass time-series classification, the performance is comparable to that of neural controlled differential equations, a paradigm explicitly built for time-series analysis. Finally, we integrate PD-SSM into a hybrid Transformer-SSM architecture and demonstrate that the model can effectively track the states of a complex FSA in which transitions are encoded as a set of variable-length English sentences. The code is available at https://github.com/IBM/expressive-sparse-state-space-model
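The abstract's key claim is that multiplying a hidden state by $PD$ (column one-hot times complex diagonal) costs $O(N)$ rather than the $O(N^2)$ of a dense matrix. The sketch below illustrates why, under my own illustrative names and shapes (the paper's actual parameterization and training details are not reproduced here): the diagonal scales each entry, and the one-hot columns merely route each scaled entry to one output row.

```python
import numpy as np

# Hypothetical O(N) application of A = P @ D, where P is column one-hot
# and D is complex diagonal, as described in the PD-SSM abstract.
# `cols`, `diag`, and `apply_pd` are illustrative names, not the paper's API.

def apply_pd(cols, diag, h):
    """Compute (P @ np.diag(diag)) @ h without materializing P.

    cols[j] is the row index of the single 1 in column j of P;
    diag holds the complex diagonal of D.
    """
    scaled = diag * h                 # D @ h: elementwise, O(N)
    out = np.zeros_like(scaled)
    np.add.at(out, cols, scaled)      # column j of P routes entry j to row cols[j]
    return out

rng = np.random.default_rng(0)
N = 4
cols = rng.integers(0, N, size=N)                     # one-hot pattern of P
diag = np.exp(1j * rng.uniform(0, 2 * np.pi, size=N))  # unit-modulus diagonal
h = rng.standard_normal(N).astype(complex)

# Dense reference: build P explicitly and check the O(N) routine matches.
P = np.zeros((N, N))
P[cols, np.arange(N)] = 1.0
assert np.allclose(apply_pd(cols, diag, h), P @ np.diag(diag) @ h)
```

Keeping the diagonal entries on (or inside) the unit circle is one simple way such a recurrence stays bounded-input bounded-output, consistent with the BIBO-stability claim.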
Problem

Research questions and friction points this paper is trying to address.

How to enable exact finite-state automata state tracking with optimal state size
How to reduce the computational cost of the SSM recurrence without sacrificing expressivity
How to guarantee BIBO stability while emulating N-state FSAs efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured sparse parametrization of transition matrices
Product of column one-hot and diagonal matrices
Enables finite-state automata tracking with optimal state size
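The FSA-tracking claim above rests on a simple observation: on one-hot state vectors, a column one-hot matrix can exactly realize any FSA transition function. The toy parity automaton below is my own example (not from the paper) showing this mechanism.

```python
import numpy as np

# Illustrative only: a 2-state parity automaton over inputs {0, 1}.
# delta[inp][s] gives the next state; input 1 flips the state, input 0 keeps it.
delta = {0: [0, 1], 1: [1, 0]}

def one_hot_transition(inp, N=2):
    """Build the column one-hot matrix P with P[delta[inp][s], s] = 1."""
    P = np.zeros((N, N))
    for s in range(N):
        P[delta[inp][s], s] = 1.0
    return P

state = np.array([1.0, 0.0])          # start in state 0, one-hot encoded
for tok in [1, 0, 1, 1]:
    state = one_hot_transition(tok) @ state

# Three 1-tokens flip parity three times: the automaton ends in state 1.
assert state.argmax() == 1
```

Because each transition matrix has exactly one 1 per column, the state vector stays one-hot at every step, which is the intuition behind simulating an N-state FSA with a state of dimension N.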
Aleksandar Terzić
IBM Research – Zurich, Department of Computer Science, ETH Zürich
Nicolas Menet
IBM Research – Zurich, Department of Computer Science, ETH Zürich
Michael Hersche
Research Scientist, IBM Research-Zurich
Machine Learning, Vector-symbolic Architectures
T
Thomas Hofmann
Department of Computer Science, ETH Zürich
Abbas Rahimi
Research Staff Member, IBM Research-Zurich
Machine Reasoning, Neurosymbolic AI, AI Hardware, HW/SW Codesign, Embedded Systems