🤖 AI Summary
This work investigates why linear recurrent neural networks (RNNs) are more amenable to parallelization than their traditional nonlinear counterparts, establishing a theoretical foundation for their efficient training in large language models. Drawing on computational complexity theory, arithmetic circuits, and automata theory, the study gives a tight connection between RNN variants and standard complexity classes such as NC¹, L, and P. It shows that linear RNNs can be modeled as logarithmic-depth arithmetic circuits, only a slight depth overhead over the log-depth boolean circuits that transformers admit, whereas nonlinear RNNs can solve L-complete (and, under polynomial precision, even P-complete) problems, rendering them inherently difficult to parallelize as efficiently as transformers. The analysis further delineates fine-grained differences in expressive power between permutation-diagonal LRNNs (NC¹-complete) and the more expressive diagonal-plus-low-rank LRNNs (PNC¹-complete).
📝 Abstract
The community is increasingly exploring linear RNNs (LRNNs) as language models, motivated by their expressive power and parallelizability. While prior work establishes the expressivity benefits of LRNNs over transformers, it is unclear what makes LRNNs -- but not traditional, nonlinear RNNs -- as easy to parallelize in practice as transformers. We answer this question by providing a tight connection between types of RNNs and standard complexity classes. We show that LRNNs can be viewed as log-depth (bounded fan-in) arithmetic circuits, which represents only a slight depth overhead relative to log-depth boolean circuits that transformers admit. Furthermore, we show that nonlinear RNNs can solve $\mathsf{L}$-complete problems (and even $\mathsf{P}$-complete ones, under polynomial precision), revealing a fundamental barrier to parallelizing them as efficiently as transformers. Our theory also identifies fine-grained expressivity differences between recent popular LRNN variants: permutation-diagonal LRNNs are $\mathsf{NC}^1$-complete whereas diagonal-plus-low-rank LRNNs are more expressive ($\mathsf{PNC}^1$-complete). We provide further insight by associating each type of RNN with a corresponding automata-theoretic model that it can simulate. Together, our results reveal fundamental tradeoffs between nonlinear RNNs and different variants of LRNNs, providing a foundation for designing LLM architectures that achieve an optimal balance between expressivity and parallelism.
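To make the parallelism argument concrete, here is a minimal illustrative sketch (not code from the paper): a diagonal linear recurrence h_t = a_t · h_{t-1} + b_t is a composition of affine maps, and affine maps compose associatively, so all prefixes can be computed by an associative scan in O(log T) parallel depth. A nonlinear RNN step offers no such closed-form composition, which is exactly the barrier the abstract describes. The function names and the simulated-parallel scan below are illustrative choices, not the paper's algorithm.

```python
# Why linear recurrences parallelize: each step x -> a*x + b is an
# affine map, and affine maps compose associatively, so prefixes can
# be computed by an associative scan in O(log T) depth.

def compose(f, g):
    """Compose affine maps: apply g first, then f.
    f = (a2, b2), g = (a1, b1)  ->  x |-> a2*(a1*x + b1) + b2."""
    a2, b2 = f
    a1, b1 = g
    return (a2 * a1, a2 * b1 + b2)

def sequential_scan(steps, h0):
    """Baseline: the usual O(T)-depth sequential recurrence."""
    hs, h = [], h0
    for a, b in steps:
        h = a * h + b
        hs.append(h)
    return hs

def prefix_compositions(steps):
    """Inclusive prefix-compositions by recursive doubling
    (Hillis-Steele scan): O(log T) rounds; within each round the
    compose calls are independent and could run in parallel."""
    pref = list(steps)
    offset = 1
    while offset < len(pref):
        nxt = list(pref)
        for i in range(offset, len(pref)):
            # pref[i] covers later steps, so it is applied after
            # pref[i - offset], which covers the earlier block.
            nxt[i] = compose(pref[i], pref[i - offset])
        pref = nxt
        offset *= 2
    return pref

def parallel_scan(steps, h0):
    """Apply each prefix map to the initial state in one final pass."""
    return [a * h0 + b for a, b in prefix_compositions(steps)]
```

The same trick fails for a nonlinear step such as h_t = tanh(a_t · h_{t-1} + b_t): composing two such steps does not yield another map of the same small form, so there is no analogous log-depth reduction.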