LoLA: Low-Rank Linear Attention With Sparse Caching

๐Ÿ“… 2025-05-29
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the low fidelity of linear attention in approximating softmax attention, and the "memory collisions" that arise when long contexts are compressed into a recurrent state, this paper proposes LoLA, a low-rank linear attention mechanism with sparse caching that distributes past key-value pairs across three forms of memory: (i) a local sliding window for recent pairs, (ii) a difficulty-aware sparse global cache that stores hard-to-memorize pairs separately so they do not interfere with past associative memories, and (iii) the recurrent hidden state of linear attention for generic pairs. It is described as the first method to enable difficulty-aware, tiered storage of key-value pairs while preserving subquadratic complexity, and it substantially improves long-range dependency modeling: on the RULER needle-in-a-haystack benchmark, 4K-context retrieval accuracy improves from 0.6% to 97.4%, the cache is roughly 4.6x smaller (about 21.7% of the size) than that of Llama-3.1 8B, and nearly all experiments can be reproduced on a single consumer-grade GPU.
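
Below is a minimal sketch of how a difficulty-aware sparse cache could sit alongside a sliding window and a linear-attention hidden state. The function name, the reconstruction-error "hardness" heuristic, and the thresholds are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def route_kv(K_window, V_window, S, cache, k, v,
             window_size=128, cache_size=256, hardness_threshold=0.5):
    """Hypothetical routing of one key-value pair into a three-tier memory.

    K_window, V_window : lists of recent keys/values (exact sliding-window attention)
    S                  : (d_k, d_v) linear-attention hidden state, a sum of outer products k v^T
    cache              : list of (key, value) pairs stored exactly (sparse global cache)
    """
    # 1. Every new pair first enters the sliding window.
    K_window.append(k)
    V_window.append(v)

    if len(K_window) > window_size:
        # 2. The oldest pair is evicted from the window; decide where it goes next.
        k_old, v_old = K_window.pop(0), V_window.pop(0)

        # Assumed hardness score: how poorly the current hidden state would
        # reconstruct v_old from k_old. Badly approximated pairs would
        # "collide" with existing associative memories if folded into S.
        v_hat = k_old @ S
        hardness = np.linalg.norm(v_old - v_hat) / (np.linalg.norm(v_old) + 1e-6)

        if hardness > hardness_threshold and len(cache) < cache_size:
            # 3a. Difficult pairs are kept exactly in the sparse global cache.
            cache.append((k_old, v_old))
        else:
            # 3b. Generic pairs are compressed into the recurrent hidden state.
            S += np.outer(k_old, v_old)

    return K_window, V_window, S, cache
```

Keeping hard pairs exact while folding easy ones into the hidden state is what bounds memory here: the window and cache have fixed sizes, so per-token cost stays constant regardless of context length.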

๐Ÿ“ Abstract
Transformer-based large language models suffer from quadratic complexity at inference on long sequences. Linear attention methods are efficient alternatives; however, they fail to provide an accurate approximation of softmax attention. By additionally incorporating sliding window attention into each linear attention head, this gap can be closed for short context-length tasks. Unfortunately, these approaches cannot recall important information from long contexts due to "memory collisions". In this paper, we propose LoLA: Low-rank Linear Attention with sparse caching. LoLA separately stores additional key-value pairs that would otherwise interfere with past associative memories. Moreover, LoLA further closes the gap between linear attention models and transformers by distributing past key-value pairs into three forms of memory: (i) recent pairs in a local sliding window; (ii) difficult-to-memorize pairs in a sparse, global cache; and (iii) generic pairs in the recurrent hidden state of linear attention. As an inference-only strategy, LoLA enables pass-key retrieval on up to 8K context lengths on needle-in-a-haystack tasks from RULER. It boosts the accuracy of the base subquadratic model from 0.6% to 97.4% at 4K context lengths, with a 4.6x smaller cache than that of Llama-3.1 8B. LoLA demonstrates strong performance on zero-shot commonsense reasoning tasks among 1B and 8B parameter subquadratic models. Finally, LoLA is an extremely lightweight approach: nearly all of our results can be reproduced on a single consumer GPU.
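
For context on the "efficient alternatives" mentioned in the abstract, here is a generic sketch of the recurrent (hidden-state) form of linear attention; the feature map phi and the variable names are placeholder assumptions, not LoLA's low-rank parameterization. It shows why the state stays constant-size per token, and also why distinct keys can overwrite one another in S (the "memory collisions" above).

```python
import numpy as np

def linear_attention_decode(queries, keys, values,
                            phi=lambda x: np.maximum(x, 0.0) + 1.0):
    """Generic recurrent linear attention: a fixed-size state instead of a growing KV cache.

    phi is a placeholder positive feature map (an assumption); the paper builds on a
    learned low-rank approximation of softmax attention rather than this simple map.
    """
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_k, d_v))   # associative memory: sum of phi(k_t) v_t^T over past tokens
    z = np.zeros(d_k)          # normalizer: sum of phi(k_t)
    outputs = []
    for q, k, v in zip(queries, keys, values):
        fk = phi(k)
        S += np.outer(fk, v)   # fold the new pair into the hidden state
        z += fk
        fq = phi(q)
        outputs.append((fq @ S) / (fq @ z + 1e-6))  # read-out approximating softmax attention
    return np.stack(outputs)
```
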
Problem

Research questions and friction points this paper is trying to address.

Quadratic inference complexity of Transformers on long sequences
Low fidelity of linear attention in approximating softmax attention, even when combined with sliding windows
Recalling important information from long contexts without large memory overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank linear attention with sparse caching
Three memory forms for key-value pairs (see the sketch after this list)
Lightweight inference-only strategy
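
As a rough illustration of the "three memory forms" item above, the sketch below pools exact attention over the sliding window and the sparse cache with the linear-attention approximation at query time. The mixing and normalization are assumptions; the paper's actual read-out may differ.

```python
import numpy as np

def three_tier_readout(q, K_window, V_window, cache, S, z,
                       phi=lambda x: np.maximum(x, 0.0) + 1.0):
    """Hypothetical query-time read over all three memories (not the paper's exact math).

    Exact, softmax-style attention covers the sliding window and the sparse cache;
    the linear-attention state (S, z) approximates all remaining, compressed pairs.
    Numerators and denominators are pooled so the output stays normalized.
    """
    num = np.zeros(S.shape[1])
    den = 1e-6

    # Exact attention over recent pairs and over cached "hard" pairs.
    cached_keys = [ck for ck, _ in cache]
    cached_vals = [cv for _, cv in cache]
    for k, v in zip(list(K_window) + cached_keys, list(V_window) + cached_vals):
        w = np.exp(q @ k)      # unnormalized softmax weight
        num += w * v
        den += w

    # Approximate attention over everything folded into the hidden state.
    fq = phi(q)
    num += fq @ S              # assumes S accumulates phi(k_t) v_t^T
    den += fq @ z              # assumes z accumulates phi(k_t)

    return num / den
```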