TRA: Better Length Generalisation with Threshold Relative Attention

📅 2025-03-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Transformers suffer severe performance degradation during sequence length extrapolation, primarily due to two inherent flaws in self-attention: residual irrelevant information and position bias that amplifies with distance. This paper proposes Thresholded Relative Attention (TRA), the first method to jointly integrate *dynamic sparse key pruning* and *contextualized relative distance modeling*. TRA employs input-adaptive thresholds to filter out irrelevant keys; only retained keys participate in relative positional encoding and softmax normalization—whose support is accordingly redefined. Implemented within a decoder-only architecture, TRA introduces no additional parameters. On canonical length-extrapolation benchmarks—including copy, induction head, and algorithmic tasks—TRA extends the maximum extrapolatable sequence length by over 3× and reduces generalization error by more than 60%, substantially outperforming standard Transformers.

Technology Category

Application Category

📝 Abstract
Transformers struggle with length generalisation, displaying poor performance even on basic tasks. We test whether these limitations can be explained through two key failures of the self-attention mechanism. The first is the inability to fully remove irrelevant information. The second is tied to position, even if the dot product between a key and query is highly negative (i.e. an irrelevant key) learned positional biases may unintentionally up-weight such information - dangerous when distances become out of distribution. Put together, these two failure cases lead to compounding generalisation difficulties. We test whether they can be mitigated through the combination of a) selective sparsity - completely removing irrelevant keys from the attention softmax and b) contextualised relative distance - distance is only considered as between the query and the keys that matter. We show how refactoring the attention mechanism with these two mitigations in place can substantially improve generalisation capabilities of decoder only transformers.
Problem

Research questions and friction points this paper is trying to address.

Transformers struggle with length generalization in tasks
Self-attention fails to filter irrelevant information effectively
Positional biases incorrectly weight out-of-distribution distances
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective sparsity removes irrelevant keys
Contextualized relative distance focuses queries
Threshold relative attention improves generalization
🔎 Similar Papers
No similar papers found.