Why Depth Matters in Parallelizable Sequence Models: A Lie Algebraic View

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates error bounds for parallelizable sequence models operating beyond their expressivity limits and characterizes how these errors scale with model depth. Using Lie-algebraic control theory, the work establishes a correspondence between model depth and a tower of Lie algebra extensions, rigorously delineating the expressive limits of constant-depth architectures, and proves that the approximation error decays exponentially with increasing depth. The analysis applies Lie-algebraic systems theory to parallel sequence modeling, revealing depth as a critical factor governing expressive power. The theoretical findings are numerically validated on symbolic token-prediction and continuous-valued state-tracking tasks, confirming the tightness of the derived error bounds.

📝 Abstract
Scalable sequence models, such as Transformer variants and structured state-space models, often trade expressivity for sequence-level parallelism, which enables efficient training. Here we examine error bounds and how error scales when models operate outside their expressivity regimes, using a Lie-algebraic control perspective. Our theory formulates a correspondence between the depth of a sequence model and a tower of Lie algebra extensions. Echoing recent theoretical studies, we characterize the Lie-algebraic class of constant-depth sequence models and their corresponding expressivity bounds. Furthermore, we analytically derive an approximation error bound and show that the error diminishes exponentially as depth increases, consistent with the strong empirical performance of these models. We validate our theoretical predictions with experiments on symbolic word problems and continuous-valued state-tracking problems.
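As a hedged schematic only (the constants below are placeholders, not quantities stated in the abstract), the exponential-decay claim can be written as:

```latex
% Schematic form of the depth-dependent approximation error bound:
% the error err(D) decays exponentially as the model depth D grows.
\[
  \mathrm{err}(D) \;\le\; C\, e^{-\lambda D},
  \qquad C,\ \lambda > 0 \ \text{(problem-dependent constants, assumed here)}
\]
```

Read this only as the qualitative shape implied by "error diminishes exponentially as depth increases"; the paper's precise bound and constants are not reproduced on this page.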
Problem

Research questions and friction points this paper is trying to address.

sequence models
expressivity
depth
Lie algebra
approximation error
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lie algebra
sequence modeling
model depth
expressivity
approximation error
Gyuryang Heo
Howard Hughes Medical Institute, Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA, USA
Timothy Ngotiaoco
Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA, USA
Kazuki Irie
Harvard University
computer science, artificial intelligence, cognitive science, neural networks, comparative literature
Samuel J. Gershman
Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA, USA; Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA, USA
Bernardo Sabatini
Howard Hughes Medical Institute, Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA, USA