AI Summary
This work proposes Exclusive Self-Attention (XSA), a novel variant of self-attention that mitigates a tendency of conventional self-attention: overemphasizing the current token's own positional information, which impairs contextual modeling. XSA introduces, for the first time, an orthogonality constraint on value vectors within the self-attention computation, compelling the model to attend exclusively to context representations orthogonal to the current token. This design effectively suppresses interference from the token's own positional embedding. Implemented within a standard Transformer architecture, XSA consistently outperforms conventional self-attention on language modeling benchmarks, with performance gains becoming increasingly pronounced as sequence length grows. Notably, the advantage persists even at scale, yielding measurable improvements in a 2.7B-parameter setting.
Abstract
We introduce exclusive self-attention (XSA), a simple modification of self-attention (SA) that improves the Transformer's sequence modeling performance. The key idea is to constrain attention to capture only information orthogonal to the token's own value vector (thereby excluding the token's own positional information), encouraging better context modeling. Evaluated on standard language modeling tasks, XSA consistently outperforms SA across model sizes up to 2.7B parameters and shows increasingly larger gains as sequence length grows.
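The abstract does not spell out the exact XSA formulation, but the stated constraint — keeping only the part of the attention output orthogonal to the token's own value vector — admits a natural reading: compute standard attention, then project out the component of each output along that token's own value vector. The sketch below illustrates this interpretation for a single unbatched attention head; the function name and the single-head simplification are our assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def exclusive_self_attention(q, k, v, eps=1e-8):
    """One reading of XSA (a sketch, not the authors' code).

    q, k, v: (seq_len, d) arrays for a single head, no batching.
    Returns attention outputs with each token's own value
    direction projected out, so the result carries only
    context information orthogonal to v_i.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)      # (T, T) scaled dot products
    out = softmax(scores) @ v          # standard SA output
    # Remove the component of out_i parallel to v_i:
    # out_i <- out_i - (out_i . v_i / ||v_i||^2) v_i
    coef = (out * v).sum(-1, keepdims=True) / ((v * v).sum(-1, keepdims=True) + eps)
    return out - coef * v
```

After this projection, each output vector is (numerically) orthogonal to the corresponding value vector, which is the "exclusive" property the abstract describes.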