AI Summary
This work proposes Krause Attention, a novel attention mechanism that addresses representation collapse and attention sink, common issues in traditional Transformers caused by global normalization in self-attention. By introducing bounded-confidence consensus dynamics into attention computation for the first time, Krause Attention replaces global similarity-based aggregation with distance-based, locally sparse interactions, thereby promoting structured local synchronization rather than global mixing. This design mitigates excessive attention concentration and reduces computational complexity from quadratic to linear in sequence length. Extensive experiments demonstrate consistent performance improvements across diverse benchmarks, including Vision Transformers on CIFAR and ImageNet, autoregressive generation on MNIST and CIFAR-10, and large language models such as Llama and Qwen, all while significantly lowering computational overhead.
Abstract
Self-attention in Transformers relies on globally normalized softmax weights, causing all tokens to compete for influence at every layer. When composed across depth, this interaction pattern induces strong synchronization dynamics that favor convergence toward a dominant mode, a behavior associated with representation collapse and attention sink phenomena. We introduce Krause Attention, a principled attention mechanism inspired by bounded-confidence consensus dynamics. Krause Attention replaces similarity-based global aggregation with distance-based, localized, and selectively sparse interactions, promoting structured local synchronization instead of global mixing. We relate this behavior to recent theory modeling Transformer dynamics as interacting particle systems, and show how bounded-confidence interactions naturally moderate attention concentration and alleviate attention sinks. Restricting interactions to local neighborhoods also reduces runtime complexity from quadratic to linear in sequence length. Experiments across vision (ViT on CIFAR/ImageNet), autoregressive generation (MNIST/CIFAR-10), and large language models (Llama/Qwen) demonstrate consistent gains with substantially reduced computation, highlighting bounded-confidence dynamics as a scalable and effective inductive bias for attention.
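The core idea described above, replacing softmax-normalized global mixing with averaging restricted to tokens within a distance threshold, follows the classic Hegselmann-Krause bounded-confidence update. The paper's exact formulation (and its linear-time neighborhood restriction) is not given here, so the following is only a minimal illustrative sketch of the bounded-confidence mixing rule itself; the function name `krause_attention` and the uniform in-neighborhood weighting are assumptions, and the pairwise-distance computation is kept quadratic for clarity.

```python
import numpy as np

def krause_attention(x, eps=1.0):
    """Bounded-confidence (Hegselmann-Krause style) token update.

    Each token averages only over tokens whose representation lies
    within distance `eps`, instead of softmax-weighted global mixing.
    Illustrative sketch only; not the paper's exact mechanism.
    """
    # Pairwise Euclidean distances between token representations
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    # Bounded-confidence mask: interact only with nearby tokens
    # (every token includes itself, since its self-distance is 0)
    mask = (dists <= eps).astype(x.dtype)
    # Uniform averaging over each token's local neighborhood
    weights = mask / mask.sum(axis=1, keepdims=True)
    return weights @ x
```

With two well-separated token clusters and a small `eps`, each cluster synchronizes internally while the clusters stay apart, illustrating structured local synchronization rather than collapse toward a single dominant mode.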