🤖 AI Summary
Understanding how sequence models (softmax attention, normalized/linear attention, and state-space models (SSMs)) fundamentally differ in capturing long-range dependencies and in memory capacity remains challenging.
Method: We propose a unified dynamical systems framework that formally integrates these three model classes, enabling spectral analysis of their eigenvalue distributions to characterize frequency-domain behaviors and reveal intrinsic links between spectral properties and task performance. We further introduce a novel, spectrum-driven, task-level interpretability metric that quantifies how architectural design shapes the spectral response and, in turn, how well that response matches task-specific requirements.
Contribution/Results: Empirical evaluation across multiple benchmarks demonstrates that eigenvalue spectra effectively encode memory capacity and alignment with task demands, enabling principled architecture optimization and consistent performance gains. This work establishes a rigorous, interpretable, and computationally tractable spectral analysis paradigm for understanding, diagnosing, and designing sequence models.
📝 Abstract
Although softmax attention drives state-of-the-art performance for sequence models, its quadratic complexity limits scalability, motivating linear alternatives such as state space models (SSMs). While these alternatives improve efficiency, their fundamental differences in information processing remain poorly understood. In this work, we leverage the recently proposed dynamical systems framework to represent softmax, normalized, and linear attention as dynamical systems, enabling a structured comparison with SSMs through their respective eigenvalue spectra. Since eigenvalues capture essential aspects of dynamical system behavior, we conduct an extensive empirical analysis across diverse sequence models and benchmarks. We first show that eigenvalues shape memory and long-range dependency modeling, revealing spectral signatures that align with task requirements. Building on these insights, we then investigate how architectural modifications in sequence models affect both eigenvalue spectra and task performance. This correspondence further establishes eigenvalue analysis as a principled tool for interpreting, understanding, and ultimately improving the capabilities of sequence models.
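To make the spectral viewpoint concrete, here is a minimal sketch (not the paper's actual framework) of the kind of analysis the abstract describes: a linear recurrence h_t = A h_{t-1} + B x_t, the dynamical-systems form that SSMs (and, under the framework above, attention variants) can be cast into, where the eigenvalues of the state-transition matrix A govern how quickly past information decays. All names and the toy matrix below are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): eigenvalue spectrum
# of a linear recurrence h_t = A h_{t-1} + B x_t. Eigenvalue magnitudes
# near 1 imply slow decay of past inputs (long memory); magnitudes near 0
# imply rapid forgetting (short memory).
import numpy as np

rng = np.random.default_rng(0)

def spectrum(A):
    """Eigenvalues of the state-transition matrix A."""
    return np.linalg.eigvals(A)

# A toy SSM-style state matrix: random Gaussian, rescaled so its spectral
# radius is 0.9 (a stable recurrence with moderately long memory).
n = 16
A = rng.standard_normal((n, n)) / np.sqrt(n)
A *= 0.9 / np.max(np.abs(spectrum(A)))

eigs = spectrum(A)
radius = np.max(np.abs(eigs))

# Rough effective memory horizon of the slowest-decaying mode:
# contributions shrink by a factor `radius` per step, so the
# characteristic timescale is ~ 1 / (1 - radius).
horizon = 1.0 / (1.0 - radius)
print(f"spectral radius: {radius:.3f}, approx memory horizon: {horizon:.1f} steps")
```

Reading off the spectral radius in this way is the simplest instance of the correspondence the abstract points to: architectures whose state matrices place eigenvalues closer to the unit circle retain information over longer horizons, which is what the empirical analysis connects to task-level memory demands.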