StateX: Enhancing RNN Recall via Post-training State Expansion

📅 2025-09-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Recurrent neural networks (RNNs) such as linear attention and state space models (SSMs) compress all contextual information into a constant-size recurrent state, which limits accurate recall from long contexts; training RNNs with larger states from scratch is prohibitively expensive. StateX is a training pipeline that expands the states of pre-trained RNNs through post-training. For the two popular classes of RNNs, linear attention and SSMs, it applies architectural modifications that scale up the state size with no or negligible increase in model parameters. On models up to 1.3B parameters, StateX improves long-context recall and in-context learning without incurring high post-training costs and without compromising other capabilities, offering a practical path toward long-context RNNs.

πŸ“ Abstract
While Transformer-based models have demonstrated remarkable language modeling performance, their high complexities result in high costs when processing long contexts. In contrast, recurrent neural networks (RNNs) such as linear attention and state space models have gained popularity due to their constant per-token complexities. However, these recurrent models struggle with tasks that require accurate recall of contextual information from long contexts, because all contextual information is compressed into a constant-size recurrent state. Previous works have shown that recall ability is positively correlated with the recurrent state size, yet directly training RNNs with larger recurrent states results in high training costs. In this paper, we introduce StateX, a training pipeline for efficiently expanding the states of pre-trained RNNs through post-training. For two popular classes of RNNs, linear attention and state space models, we design post-training architectural modifications to scale up the state size with no or negligible increase in model parameters. Experiments on models up to 1.3B parameters demonstrate that StateX efficiently enhances the recall and in-context learning ability of RNNs without incurring high post-training costs or compromising other capabilities.
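The recall bottleneck described in the abstract can be seen in a minimal linear-attention recurrence: the entire context is folded into a state of fixed shape (d_k, d_v), so capacity does not grow with sequence length. This is an illustrative sketch only (shapes and names are not from the paper):

```python
import numpy as np

def linear_attention(queries, keys, values):
    """Toy linear-attention recurrence: all context is compressed
    into a fixed-size state S of shape (d_k, d_v)."""
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_k, d_v))
    outputs = []
    for q, k, v in zip(queries, keys, values):
        S += np.outer(k, v)      # write: rank-1 update of the state
        outputs.append(q @ S)    # read: query the compressed state
    return np.array(outputs)

rng = np.random.default_rng(0)
T, d_k, d_v = 8, 4, 4
q = rng.standard_normal((T, d_k))
k = rng.standard_normal((T, d_k))
v = rng.standard_normal((T, d_v))
out = linear_attention(q, k, v)
print(out.shape)  # (8, 4): per-token cost and state size are constant in T
```

Because the state holds only d_k * d_v numbers no matter how long the context is, recall degrades on long inputs, and prior work ties recall ability to this state size.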
Problem

Research questions and friction points this paper is trying to address.

Enhancing RNN recall ability from long contexts
Reducing training costs for larger recurrent states
Improving in-context learning without parameter increase
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-training state expansion for RNNs
Architectural modifications to scale state size
Enhancing recall without increasing parameters
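One plausible shape for such a post-training modification, sketched for a linear-attention layer, is to widen the key/query projections so the recurrent state grows while the parameter increase stays small. This is a hypothetical illustration, not the paper's concrete modification; the widths, zero-initialization trick, and variable names are assumptions:

```python
import numpy as np

# Hypothetical state-expansion sketch for a linear-attention layer.
# Widening the key dimension d_k -> d_k + d_extra grows the recurrent
# state from (d_k, d_v) to (d_k + d_extra, d_v); only the appended
# projection columns are new parameters, later tuned in post-training.
rng = np.random.default_rng(0)
d_model, d_k, d_v, d_extra = 16, 4, 4, 4

W_k = rng.standard_normal((d_model, d_k)) * 0.02   # "pretrained" key projection
W_q = rng.standard_normal((d_model, d_k)) * 0.02   # "pretrained" query projection

# Keep the pretrained columns; append new ones. Zero-initializing the new
# query columns means the expanded state is initially read with zero weight,
# so outputs start out identical to the pretrained model.
W_k_new = np.concatenate(
    [W_k, rng.standard_normal((d_model, d_extra)) * 0.02], axis=1)
W_q_new = np.concatenate([W_q, np.zeros((d_model, d_extra))], axis=1)

old_state_size = d_k * d_v
new_state_size = (d_k + d_extra) * d_v
print(old_state_size, new_state_size)  # 16 32: state capacity doubled
```

The zero-init read path is one common warm-start idea for adding capacity without disturbing pretrained behavior; the paper's actual modifications are designed per architecture (linear attention vs. SSMs).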
Xingyu Shen
Department of Science and Technology, Tsinghua University, Beijing, China
Yingfa Chen
PhD at Tsinghua University
machine learning, long-context modeling, language modeling
Zhen Leng Thai
Tsinghua University
NLP, Model Architectures, Data Engineering
Xu Han
Department of Science and Technology, Tsinghua University, Beijing, China
Zhiyuan Liu
Department of Science and Technology, Tsinghua University, Beijing, China
Maosong Sun
Professor of Computer Science and Technology, Tsinghua University
Natural Language Processing, Artificial Intelligence, Social Computing