Implicit Language Models are RNNs: Balancing Parallelization and Expressivity

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the trade-off between parallelizability and modeling capacity in state space models (SSMs) versus Transformers by proposing Implicit SSMs: an architecture that applies differentiable fixed-point iterations to realize RNN-style nonlinear state updates. Crucially, it preserves full training parallelism while substantially enhancing representational power. The authors give a theoretical proof that Implicit SSMs can implement the nonlinear state transitions of RNNs. To balance efficiency and accuracy, they introduce a progressive convergence training scheme that enforces full convergence only on critical tokens. Empirically, Implicit SSMs significantly outperform both Transformers and explicit SSMs on regular-language recognition tasks. Furthermore, a 1.3B-parameter model pretrained on 207B tokens achieves consistent gains over same-scale explicit baselines across standard downstream benchmarks.

📝 Abstract
State-space models (SSMs) and transformers dominate the language modeling landscape. However, they are constrained to a lower computational complexity than classical recurrent neural networks (RNNs), limiting their expressivity. In contrast, RNNs lack parallelization during training, raising fundamental questions about the trade-off between parallelization and expressivity. We propose implicit SSMs, which iterate a transformation until convergence to a fixed point. Theoretically, we show that implicit SSMs implement the nonlinear state transitions of RNNs. Empirically, we find that only approximate fixed-point convergence suffices, enabling the design of a scalable training curriculum that largely retains parallelization, with full convergence required only for a small subset of tokens. Our approach demonstrates superior state-tracking capabilities on regular languages, surpassing transformers and SSMs. We further scale implicit SSMs to natural language reasoning tasks and to pretraining of large-scale language models of up to 1.3B parameters on 207B tokens, representing, to our knowledge, the largest implicit model trained to date. Notably, our implicit models outperform their explicit counterparts on standard benchmarks.
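The core mechanism described in the abstract, iterating a state transformation to a fixed point and stopping once updates stabilize, can be illustrated with a minimal sketch. This is not the paper's parameterization: the `tanh` update, the weight names `W_h`/`W_x`, and the tolerance-based stopping rule are all illustrative assumptions standing in for the actual implicit-SSM layer.

```python
import numpy as np

def implicit_state_update(x, h_init, W_h, W_x, tol=1e-6, max_iter=50):
    """Illustrative fixed-point iteration for one implicit state update.

    Iterates h <- tanh(W_h @ h + W_x @ x) until successive iterates agree
    to within `tol`, approximating a nonlinear RNN-style state transition.
    Stopping early (loose `tol`, small `max_iter`) corresponds to the
    approximate convergence the abstract says suffices for most tokens.
    """
    h = h_init
    for _ in range(max_iter):
        h_next = np.tanh(W_h @ h + W_x @ x)
        if np.linalg.norm(h_next - h) < tol:  # approximate convergence reached
            return h_next
        h = h_next
    return h  # budget exhausted; return best approximation
```

When the update map is contractive (e.g. `W_h` has small spectral norm), the iteration converges to a unique fixed point regardless of `h_init`, which is what makes such implicit layers well defined.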
Problem

Research questions and friction points this paper is trying to address.

Balance parallelization and expressivity in models
Improve training scalability of implicit models
Enhance state-tracking in language modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit SSMs balance parallelization and expressivity
Approximate fixed-point convergence enables scalable training
Implicit models outperform explicit counterparts on benchmarks