🤖 AI Summary
This work addresses the "attention sink," a phenomenon in large language models where disproportionate attention is allocated to the initial token, and links it to attention head collapse, in which only a small, fixed subset of heads remains active. The study provides theoretical and empirical evidence that both Vanilla Attention and Sink Attention inherently construct a mixture-of-experts (MoE)-like structure within attention layers. To mitigate head collapse, the authors propose a sink-aware training algorithm with an auxiliary load-balancing loss designed for attention layers. Experiments demonstrate more uniform utilization across attention heads and consistent performance gains for Vanilla, Sink, and Gated Attention architectures.
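To make the sink phenomenon concrete, here is a minimal sketch (not from the paper) that measures, per head, how often attention mass concentrates on the first token; the `threshold` value and the `sink_rate` helper are illustrative assumptions:

```python
import numpy as np

def sink_rate(attn, threshold=0.3):
    """Per-head fraction of query positions whose attention on the
    first (sink) token exceeds `threshold`.

    attn: array of shape (num_heads, seq_len, seq_len); each row of
    attention weights sums to 1.
    """
    first_col = attn[:, :, 0]              # attention paid to token 0
    return (first_col > threshold).mean(axis=1)

# Toy example over 4 tokens: one "sinky" head, one uniform head.
sinky = np.tile(np.array([0.9, 0.05, 0.03, 0.02]), (4, 1))
uniform = np.full((4, 4), 0.25)
attn = np.stack([sinky, uniform])
print(sink_rate(attn))  # head 0 sinks on every query, head 1 never does
```

A head whose queries consistently dump mass on token 0 contributes little to the output mixture, which is the "inactive head" behavior the summary describes.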
📝 Abstract
Large Language Models (LLMs) often assign disproportionate attention to the first token, a phenomenon known as the attention sink. Several recent approaches aim to address this issue, including Sink Attention in GPT-OSS and Gated Attention in Qwen3-Next. However, a comprehensive analysis of the relationship among these attention mechanisms is lacking. In this work, we provide both theoretical and empirical evidence that the sink in both Vanilla Attention and Sink Attention naturally constructs a Mixture-of-Experts (MoE) mechanism within attention layers. This insight explains the head collapse phenomenon observed in prior work, where only a fixed subset of attention heads contributes to generation. To mitigate head collapse, we propose a sink-aware training algorithm with an auxiliary load balancing loss designed for attention layers. Extensive experiments show that our method achieves effective head load balancing and improves model performance across Vanilla Attention, Sink Attention, and Gated Attention. We hope this study offers a new perspective on attention mechanisms and encourages further exploration of the inherent MoE structure within attention layers.
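The abstract does not give the exact form of the auxiliary loss, but MoE load balancing is commonly done Switch-Transformer-style: penalize the product of each expert's mean routing probability and its routed-token fraction. A hedged sketch of that idea transplanted from experts to attention heads (the `gates` tensor and `head_balance_loss` name are assumptions, not the paper's formulation):

```python
import numpy as np

def head_balance_loss(gates):
    """MoE-style auxiliary balancing loss applied to attention heads.

    gates: (num_tokens, num_heads) non-negative per-head gate values
    (e.g. from a Gated Attention layer). The loss is minimized (value
    1.0) when probability mass and top-1 assignments are spread
    uniformly over heads, and grows as usage collapses onto few heads.
    """
    num_heads = gates.shape[1]
    probs = gates / gates.sum(axis=1, keepdims=True)   # per-token head distribution
    importance = probs.mean(axis=0)                    # mean prob mass per head
    # fraction of tokens whose top-gated head is head i
    load = (probs.argmax(axis=1)[:, None] == np.arange(num_heads)).mean(axis=0)
    return num_heads * float((importance * load).sum())

balanced = np.ones((8, 4))                       # uniform gates
collapsed = np.eye(4)[np.zeros(8, dtype=int)] + 1e-6  # all mass on head 0
print(head_balance_loss(balanced), head_balance_loss(collapsed))
# collapsed usage scores markedly higher than balanced usage
```

During training such a term would be added to the language-modeling loss with a small coefficient, nudging gradients toward uniform head utilization rather than hard-constraining it.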