Exclusive Self Attention

πŸ“… 2026-03-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes Exclusive Self-Attention (XSA), a variant of self-attention that mitigates the tendency of the conventional mechanism to overemphasize the current token's own positional information, which impairs context modeling. XSA introduces an orthogonality constraint on value vectors within the self-attention computation, compelling the model to attend only to context representations orthogonal to the current token's own value, thereby suppressing interference from the token's positional embedding. Implemented within a standard Transformer architecture, XSA consistently outperforms conventional self-attention on language modeling benchmarks, with gains growing as sequence length increases. The advantage persists at scale, yielding measurable improvements in a 2.7B-parameter setting.

πŸ“ Abstract
We introduce exclusive self attention (XSA), a simple modification of self attention (SA) that improves Transformer's sequence modeling performance. The key idea is to constrain attention to capture only information orthogonal to the token's own value vector (thus excluding information of self position), encouraging better context modeling. Evaluated on the standard language modeling task, XSA consistently outperforms SA across model sizes up to 2.7B parameters and shows increasingly larger gains as sequence length grows.
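The abstract's key idea can be sketched in code. The snippet below is a minimal NumPy sketch of one plausible reading of XSA (the paper's exact formulation may differ): compute standard scaled dot-product attention, then project out of each token's output the component parallel to that token's own value vector, so the result carries only information orthogonal to it. The function name and the projection placement are assumptions, not the authors' implementation.

```python
import numpy as np

def exclusive_self_attention(Q, K, V):
    """Hypothetical sketch of exclusive self-attention (XSA).

    Standard self-attention, followed by removing from each token's
    output the component along its own value vector, leaving only
    information orthogonal to the token's own value.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n, n) attention logits
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    out = weights @ V                              # conventional SA output
    # Projection coefficient of each output row onto the same token's value row.
    coeff = (out * V).sum(-1, keepdims=True) / ((V * V).sum(-1, keepdims=True) + 1e-9)
    return out - coeff * V                         # orthogonal to own value

rng = np.random.default_rng(0)
n, d = 5, 8
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
O = exclusive_self_attention(Q, K, V)
# Each output row is numerically orthogonal to that token's value row.
print(np.allclose((O * V).sum(-1), 0.0, atol=1e-6))
```

The projection step guarantees the orthogonality property by construction: subtracting `(oΒ·v / vΒ·v) v` from `o` zeroes the dot product with `v` exactly, up to floating-point error.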
Problem

Research questions and friction points this paper is trying to address.

self attention
sequence modeling
context modeling
Transformer
orthogonal information
Innovation

Methods, ideas, or system contributions that make the work stand out.

exclusive self attention
orthogonal attention
transformer
sequence modeling
language modeling