Language Modeling With Factorization Memory

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge that RNNs face in jointly achieving high accuracy on short-context tasks and strong generalization on long-context sequences in language modeling, this paper proposes Factorization Memory (FM), a novel recurrent architecture. FM decouples memory states into low-rank factorized representations and incorporates a sparse competitive activation mechanism, enabling constant-time and constant-memory complexity during inference while preserving training parallelism. Unlike existing RNNs or state-space models (e.g., Mamba-2), FM is the first to jointly optimize dense modeling capacity and sparse computational efficiency within a unified framework. Experiments demonstrate that FM matches Transformer performance on short-context benchmarks, while significantly outperforming both Transformers and Mamba-2 on long-context modeling—achieving superior scalability and generalization with fewer parameters and lower inference overhead.

📝 Abstract
We propose Factorization Memory, an efficient recurrent neural network (RNN) architecture that achieves performance comparable to Transformer models on short-context language modeling tasks while also demonstrating superior generalization in long-context scenarios. Our model builds upon Mamba-2, enabling Factorization Memory to exploit parallel computations during training while preserving constant computational and memory complexity during inference. To further optimize model efficiency and representational capacity, we develop a sparse formulation of Factorization Memory that updates only a subset of recurrent states at each step while preserving the strong performance of its dense counterpart. To our knowledge, this represents the first RNN architecture that successfully combines sparse memory activation with competitive performance across both short and long-context settings. This work provides a systematic empirical analysis of Factorization Memory in comparison to Transformer and Mamba-2 architectures.
Problem

Research questions and friction points this paper is trying to address.

Designing an efficient RNN architecture that matches Transformers on short-context language modeling
Achieving superior generalization in long-context language modeling scenarios
Making sparse memory activation competitive with dense recurrent updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Factorization Memory: an efficient recurrent architecture with low-rank factorized memory states
A sparse formulation that updates only a subset of recurrent states per step for greater efficiency
The first RNN to combine sparse memory activation with competitive performance across both short and long contexts
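The sparse formulation above can be illustrated with a minimal sketch: a competitive (top-k) gate scores the memory slots against the current input and updates only the winners, so per-step compute and memory stay constant regardless of sequence length. All names, shapes, the top-k gate, and the decay factor here are illustrative assumptions, not the paper's actual equations.

```python
import numpy as np

def sparse_memory_step(memory, x, W_key, W_val, k=2):
    """One hypothetical recurrent step: a competitive (top-k) gate
    selects which memory slots to update. Names and shapes are
    illustrative, not the paper's formulation."""
    # Score each memory slot against the current input.
    scores = memory @ (W_key @ x)               # shape: (num_slots,)
    # Competitive activation: only the top-k slots win the update.
    winners = np.argsort(scores)[-k:]
    gate = np.zeros_like(scores)
    gate[winners] = 1.0
    # Update only the winning slots; the rest carry over (with decay),
    # giving constant per-step compute independent of sequence length.
    update = np.outer(gate, W_val @ x)
    return 0.9 * memory + update                # 0.9 is an assumed decay

rng = np.random.default_rng(0)
num_slots, d_mem, d_in = 8, 4, 4
memory = np.zeros((num_slots, d_mem))
W_key = rng.normal(size=(d_mem, d_in))
W_val = rng.normal(size=(d_mem, d_in))
for _ in range(3):                              # process a short sequence
    memory = sparse_memory_step(memory, rng.normal(size=d_in), W_key, W_val)
# At most steps * k slots were ever touched:
print(np.count_nonzero(memory.any(axis=1)))
```

Because the gate is a hard top-k mask, each step writes to at most `k` of the `num_slots` states, which is the source of the claimed sparse computational efficiency.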
Lee Xiong — Rakuten Group, Inc.
Maksim Tkachenko — Rakuten Group, Inc.
Johanes Effendi — Rakuten Group, Inc.
Ting Cai — University of Wisconsin-Madison
Machine Learning · Data Science