🤖 AI Summary
This study investigates error bounds for parallelized sequence models operating beyond their expressivity limits and characterizes how these errors scale with model depth. Introducing Lie-algebraic control theory, the work establishes, for the first time, a correspondence between model depth and a tower of Lie algebra extensions, rigorously delineating the expressive limits of constant-depth architectures. It further proves that the approximation error decays exponentially with increasing depth. The analysis applies Lie-algebraic systems theory to parallel sequence modeling, revealing depth as a critical factor governing expressive power. The theoretical findings are numerically validated on symbolic token-prediction and continuous state-tracking tasks, confirming the tightness of the derived error bounds.
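The summary does not state the bound itself; the schematic below only illustrates the functional form that an exponential depth-to-error result of this kind asserts. The symbols ε, C, and α are placeholders for illustration, not quantities taken from the paper.

```latex
% Schematic form of an exponential depth-error bound (illustrative only):
% \epsilon(L) denotes the best approximation error achievable at depth L,
% and C > 0, \alpha > 0 are placeholder constants, not values from the paper.
\epsilon(L) \;\le\; C \, e^{-\alpha L}, \qquad L = \text{model depth}.
```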
📝 Abstract
Scalable sequence models, such as Transformer variants and structured state-space models, often trade expressive power for sequence-level parallelism, which enables efficient training. Here we examine bounds on the error, and how that error scales, when models operate outside their expressivity regimes, using a Lie-algebraic control perspective. Our theory formulates a correspondence between the depth of a sequence model and a tower of Lie algebra extensions. Echoing recent theoretical studies, we characterize the Lie-algebraic class of constant-depth sequence models and the corresponding expressivity bounds. Furthermore, we analytically derive an approximation error bound and show that the error diminishes exponentially as depth increases, consistent with the strong empirical performance of these models. We validate our theoretical predictions with experiments on symbolic word problems and continuous-valued state-tracking problems.
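The abstract does not detail the experimental tasks. Below is a minimal sketch of the two task families it names, assuming the standard formulations from the state-tracking literature: a prefix-product word problem over a finite group (symbolic) and cumulative planar rotation (continuous-valued). All function names and parameters are illustrative, not the paper's code.

```python
import itertools
import math
import random

def symbolic_word_problem(seq_len, n=3):
    """Prefix-product word problem over the symmetric group S_n (illustrative).

    Input: a sequence of random permutations (tokens).
    Target: at each position, the composition of all tokens seen so far.
    """
    perms = list(itertools.permutations(range(n)))
    tokens = [random.choice(perms) for _ in range(seq_len)]
    states, state = [], tuple(range(n))  # start at the identity
    for p in tokens:
        state = tuple(state[j] for j in p)  # compose with one consistent order
        states.append(state)
    return tokens, states

def continuous_state_tracking(seq_len):
    """Cumulative planar rotation: a continuous-valued analogue (illustrative).

    Input: a sequence of random rotation angles.
    Target: the running total angle, i.e. the state on SO(2).
    """
    angles = [random.uniform(-math.pi, math.pi) for _ in range(seq_len)]
    states, total = [], 0.0
    for a in angles:
        total = (total + a + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
        states.append(total)
    return angles, states

if __name__ == "__main__":
    toks, tgts = symbolic_word_problem(seq_len=5)
    print("S_3 word-problem targets:", tgts)
    angs, rots = continuous_state_tracking(seq_len=5)
    print("cumulative rotations:", [round(r, 3) for r in rots])
```

Both tasks require composing every input seen so far, which is precisely the sequential computation that constant-depth parallel models can only approximate, per the paper's thesis.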