🤖 AI Summary
Existing work lacks a theoretical understanding of the high-performance mechanisms of state space models (SSMs), particularly regarding gradient optimization dynamics and long-range dependency learning. Method: We establish a gradient dynamical analysis framework, revealing for the first time the decisive role of memory capacity in guiding parameter update directions; prove the theoretical equivalence between S4 and its diagonalized variant; identify an intrinsic trade-off between memory length and precision; and propose an “initialization-as-design” paradigm—constructing fixed recurrent weight structures via controllable initialization, bypassing conventional adaptive updates. Contribution/Results: Theoretically and empirically, our approach significantly improves long-range memory modeling efficiency, achieving faster convergence and superior or competitive performance on language modeling and sequence forecasting tasks. It provides a novel, interpretable training pathway for SSMs.
📝 Abstract
State space models (SSMs) have gained attention by showing potential to outperform Transformers. However, previous studies have not sufficiently addressed the mechanisms underlying their high performance owing to a lack of theoretical explanation of SSMs' learning dynamics. In this study, we provide such an explanation and propose an improved training strategy. The memory capacity of SSMs can be evaluated by examining how input time series are stored in their current state. Such an examination reveals a tradeoff between memory accuracy and length, as well as the theoretical equivalence between the structured state space sequence model (S4) and a simplified S4 with diagonal recurrent weights. This theoretical foundation allows us to elucidate the learning dynamics, proving the importance of initial parameters. Our analytical results suggest that successful learning requires the initial memory structure to be the longest possible even if memory accuracy may deteriorate or the gradient lose the teacher information. Experiments on tasks requiring long memory confirmed that extending memory is difficult, emphasizing the importance of initialization. Furthermore, we found that fixing recurrent weights can be more advantageous than adapting them because it achieves comparable or even higher performance with faster convergence. Our results provide a new theoretical foundation for SSMs and potentially offer a novel optimization strategy.