RAT: Bridging RNN Efficiency and Attention Accuracy in Language Modeling

πŸ“… 2025-07-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Softmax attention in Transformers incurs prohibitive computational cost on long-context sequences. To address this, the authors propose the RNN-Attention Transformer (RAT), an intermediate design between recurrence and attention: the input is partitioned into chunks, a simple linear recurrence captures local dependencies within each chunk, and softmax attention across chunks models long-range interactions, with the chunk size serving as a tunable efficiency-accuracy knob. With a chunk size of 16, 1.3B-parameter RAT models train 7× faster on 100K-token sequences and generate 9× faster at 4K sequence length than standard-attention baselines, while matching or exceeding their accuracy. A hybrid variant that interleaves RAT with local attention further improves inference speed and reduces KV-cache memory, yielding an average 1-point gain on commonsense reasoning, up to 4 points on code tasks, and a 1-point Rouge-L improvement on a summarization SFT task.

πŸ“ Abstract
Transformers have become the cornerstone of modern large-scale language models; however, their dependence on softmax attention poses a major computational bottleneck, particularly in long-context settings. In this work, rather than following prevalent approaches such as linear attention (or SSMs) and local attention, we introduce an intermediate design called RAT between recurrence and attention mechanisms. It partitions the input into chunks, applies a simple linear recurrence within each chunk to capture local dependencies, and then performs softmax attention across chunks to model long-range interactions. By adjusting the size of the chunk, RAT enables flexible trade-offs, combining the strengths of RNN and attention. Empirically, with a chunk size of 16, the RAT layer achieves a 7× improvement in training speed with 100K token sequences and 9× in generation at 4K sequence length, while maintaining similar or sometimes even better accuracy compared to standard attention. We demonstrate this by training 1.3B parameter models from scratch and performing large-scale evaluations, including short- and long-context benchmarks, as well as supervised fine-tuning (SFT). We further propose a hybrid architecture that interleaves RAT with local attention. By combining efficient long-range modeling with strong local interactions, this hybrid design not only improves inference speed and reduces cache memory usage compared to attention, but also consistently enhances performance, for example, achieving an average 1 point gain in commonsense reasoning tasks, up to 4 points on code tasks, and a 1 point Rouge-L increase in a summarization SFT task. Code is available at https://github.com/CLAIRE-Labo/RAT
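The chunked recurrence-plus-attention scheme described above can be sketched in a few lines. This is a deliberately simplified illustration, not the paper's exact formulation: the decay constant, the use of each chunk's final recurrent state as its attention summary, and the broadcast of chunk-level context back to tokens are all assumptions made here for clarity.

```python
import numpy as np

def rat_layer(x, chunk_size, decay=0.9):
    """Illustrative RAT-style layer (hypothetical simplification):
    a linear recurrence inside each chunk, then causal softmax
    attention across chunk summaries for long-range mixing."""
    seq_len, dim = x.shape
    assert seq_len % chunk_size == 0
    chunks = x.reshape(seq_len // chunk_size, chunk_size, dim)

    # Intra-chunk linear recurrence: h_t = decay * h_{t-1} + x_t
    states = np.zeros_like(chunks)
    for c in range(chunks.shape[0]):
        h = np.zeros(dim)
        for t in range(chunk_size):
            h = decay * h + chunks[c, t]
            states[c, t] = h

    # Inter-chunk softmax attention over each chunk's final state
    summaries = states[:, -1, :]                       # (n_chunks, dim)
    scores = summaries @ summaries.T / np.sqrt(dim)    # (n_chunks, n_chunks)
    # Causal mask: each chunk attends only to itself and earlier chunks
    mask = np.tril(np.ones_like(scores, dtype=bool))
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    context = weights @ summaries                      # (n_chunks, dim)

    # Broadcast each chunk's attended context back to its tokens
    out = states + context[:, None, :]
    return out.reshape(seq_len, dim)
```

Because the recurrence runs only within a chunk and attention operates over a sequence of length `seq_len / chunk_size`, the attention cost shrinks quadratically with the chunk size, which is the efficiency-accuracy trade-off the abstract describes.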
Problem

Research questions and friction points this paper is trying to address.

Bridges RNN efficiency and attention accuracy in language modeling
Addresses computational bottleneck of softmax attention in long contexts
Enables flexible trade-offs between RNN and attention via chunking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chunk-based linear recurrence for local dependencies
Softmax attention across chunks for long-range interactions
Hybrid architecture combining RAT with local attention
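For the hybrid variant, the paper states that RAT layers are interleaved with local attention, but the exact interleaving pattern is not given here; the alternating schedule below is a hypothetical illustration of one such arrangement.

```python
def hybrid_schedule(n_layers):
    """Hypothetical layer schedule for a RAT / local-attention hybrid:
    even-indexed layers use RAT, odd-indexed layers use sliding-window
    local attention. The 1:1 alternation is an assumption, not taken
    from the paper or its code."""
    return ["rat" if i % 2 == 0 else "local_attention" for i in range(n_layers)]

# hybrid_schedule(4) -> ['rat', 'local_attention', 'rat', 'local_attention']
```

Local-attention layers keep only a fixed window of keys and values, so interleaving them with RAT layers is what reduces the KV-cache footprint relative to full attention.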
πŸ”Ž Similar Papers
No similar papers found.