🤖 AI Summary
To address the challenge that RNNs face in jointly achieving high accuracy on short-context tasks and strong generalization on long-context sequences in language modeling, this paper proposes Factorization Memory (FM), a novel recurrent architecture. FM decouples memory states into low-rank factorized representations and incorporates a sparse competitive activation mechanism, enabling constant-time and constant-memory inference while preserving training parallelism. Unlike existing RNNs and state-space models (e.g., Mamba-2), FM is presented as the first RNN architecture to combine sparse memory activation with competitive performance across both short- and long-context settings. Experiments show that FM matches Transformer performance on short-context benchmarks while significantly outperforming both Transformers and Mamba-2 on long-context modeling, achieving superior scalability and generalization with lower inference overhead.
📝 Abstract
We propose Factorization Memory, an efficient recurrent neural network (RNN) architecture that achieves performance comparable to Transformer models on short-context language modeling tasks while also demonstrating superior generalization in long-context scenarios. Our model builds upon Mamba-2, enabling Factorization Memory to exploit parallel computations during training while preserving constant computational and memory complexity during inference. To further optimize model efficiency and representational capacity, we develop a sparse formulation of Factorization Memory that updates only a subset of recurrent states at each step while preserving the strong performance of its dense counterpart. To our knowledge, this represents the first RNN architecture that successfully combines sparse memory activation with competitive performance across both short and long-context settings. This work provides a systematic empirical analysis of Factorization Memory in comparison to Transformer and Mamba-2 architectures.
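To make the core idea concrete, here is a minimal toy sketch of a recurrent memory with sparse competitive updates. This is an illustration under assumptions, not the paper's actual parametrization: the slot layout, the gating projection `W_gate`, the decay rule, and the shared write vector are all hypothetical. It only demonstrates the general mechanism the abstract describes: at each step, slots compete via gating scores and only the top-k winning slots are updated, so per-token compute and memory stay constant regardless of sequence length.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_slots, d_slot, k = 16, 8, 4, 2  # k << n_slots: sparse update

# Hypothetical projections for illustration (not the paper's parametrization)
W_gate = rng.normal(size=(d_model, n_slots)) * 0.1  # slot competition scores
W_in = rng.normal(size=(d_model, d_slot)) * 0.1     # write projection
W_out = rng.normal(size=(n_slots * d_slot, d_model)) * 0.1  # read-out

def step(state, x, decay=0.9):
    """One recurrent step: update only the top-k winning memory slots."""
    scores = x @ W_gate                      # competitive activation over slots
    topk = np.argpartition(scores, -k)[-k:]  # winners of the competition
    new_state = state.copy()
    # Constant-time update: only k slots are touched per token.
    # For simplicity all winners receive the same write vector here.
    new_state[topk] = decay * state[topk] + (1 - decay) * (x @ W_in)
    y = new_state.reshape(-1) @ W_out        # read-out from the full memory
    return new_state, y

state = np.zeros((n_slots, d_slot))
for x in rng.normal(size=(5, d_model)):      # a short token sequence
    state, y = step(state, x)
```

In this sketch the recurrent state has fixed size `n_slots * d_slot`, so inference cost does not grow with context length; the sparse top-k write is what distinguishes the sparse formulation from a dense update that would touch every slot at every step.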