🤖 AI Summary
This paper addresses the provably efficient identification of symmetric linear dynamical systems (LDS), proposing the first end-to-end convex optimization framework whose accuracy guarantees are independent of both state dimension and effective memory length. The method represents a symmetric LDS as a convolution under a fixed spectral transformation and shows that this representation can be inverted exactly, recovering an explicit LDS from its spectral transform. The distilled LDS supports constant-time and constant-space inference per token while preserving predictive accuracy. Key contributions include: (1) the first theoretically grounded LDS distillation method with formal sample- and computation-complexity guarantees; (2) the first exact, invertible reconstruction of an LDS from its spectral representation; and (3) significant improvements in long-sequence inference efficiency, demonstrated on language modeling, while maintaining accuracy and robustness.
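The equivalence underlying the method can be illustrated on a toy example. The sketch below (hypothetical setup, not the paper's actual filters or API) builds a symmetric LDS with diagonal state matrix and checks that running its recurrence matches convolving the input with the system's impulse response, which is what makes the convolutional spectral representation faithful:

```python
import numpy as np

# Hypothetical toy instance: a symmetric LDS with diagonal state matrix
# A = diag(a), input map b, output map c, driven by a scalar input u.
rng = np.random.default_rng(0)
d, T = 4, 32
a = rng.uniform(-0.9, 0.9, d)   # eigenvalues of the symmetric A
b = rng.normal(size=d)
c = rng.normal(size=d)
u = rng.normal(size=T)          # input sequence

# (1) Run the LDS recurrence: x_t = A x_{t-1} + b u_t,  y_t = c^T x_t.
x = np.zeros(d)
y_rec = np.empty(T)
for t in range(T):
    x = a * x + b * u[t]
    y_rec[t] = c @ x

# (2) Equivalent convolution with the impulse response h_k = c^T A^k b.
h = np.array([c @ (a ** k * b) for k in range(T)])
y_conv = np.array([h[: t + 1] @ u[t::-1] for t in range(T)])

assert np.allclose(y_rec, y_conv)  # LDS output == convolution output
```

The paper's contribution is the reverse direction at scale: given the learned spectral (convolutional) representation, recover `(a, b, c)` exactly via convex optimization.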
📝 Abstract
We present the first provable method for identifying symmetric linear dynamical systems (LDS) with accuracy guarantees that are independent of the systems' state dimension or effective memory. Our approach builds upon recent work that represents symmetric LDSs as convolutions learnable via fixed spectral transformations. We show how to invert this representation, thereby recovering an LDS model from its spectral transform and yielding an end-to-end convex optimization procedure. This distillation preserves predictive accuracy while enabling constant-time and constant-space inference per token, independent of sequence length. We evaluate our method, SpectraLDS, as a component in sequence prediction architectures and demonstrate that accuracy is preserved while inference efficiency is improved on tasks such as language modeling.
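The constant-time, constant-space claim follows from the form of the distilled model: once an explicit LDS is recovered, generation needs only a fixed-size hidden state, never the token history. A minimal sketch (illustrative names, not SpectraLDS's API) of such a stepper, where each token costs O(d) time and O(d) memory regardless of position:

```python
import numpy as np

class LDSStepper:
    """Per-token inference for a distilled symmetric (diagonalized) LDS.

    Hypothetical interface for illustration: each call to step() does a
    fixed O(d) amount of work, independent of how many tokens preceded it,
    unlike a convolutional model that must re-scan the growing history.
    """

    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c   # diagonal dynamics, in/out maps
        self.x = np.zeros_like(a)          # constant-space hidden state

    def step(self, u_t):
        self.x = self.a * self.x + self.b * u_t  # O(d) state update
        return self.c @ self.x                   # O(d) readout

rng = np.random.default_rng(1)
d = 8
stepper = LDSStepper(rng.uniform(-0.9, 0.9, d),
                     rng.normal(size=d),
                     rng.normal(size=d))
ys = [stepper.step(u) for u in rng.normal(size=10_000)]
```

Note that the per-step cost above is identical for the first and the ten-thousandth token, which is the efficiency property the evaluation measures.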