Attention Sink Forges Native MoE in Attention Layers: Sink-Aware Training to Address Head Collapse

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the "attention sink," a phenomenon in large language models where excessive attention is allocated to the initial token, leading to attention head collapse, in which only a few heads remain active. The study provides the first theoretical and empirical evidence that both Vanilla and Sink Attention mechanisms inherently exhibit a mixture-of-experts (MoE)-like structure. To mitigate head collapse, the authors propose a sink-aware training approach that integrates an auxiliary load-balancing loss with a Gated Attention mechanism. This method promotes more uniform utilization across attention heads and improves overall model performance. Experiments demonstrate consistent gains in attention head load balancing and model efficacy across diverse attention architectures.

📝 Abstract
Large Language Models (LLMs) often assign disproportionate attention to the first token, a phenomenon known as the attention sink. Several recent approaches aim to address this issue, including Sink Attention in GPT-OSS and Gated Attention in Qwen3-Next. However, a comprehensive analysis of the relationship among these attention mechanisms is lacking. In this work, we provide both theoretical and empirical evidence demonstrating that the sink in Vanilla Attention and Sink Attention naturally constructs a Mixture-of-Experts (MoE) mechanism within attention layers. This insight explains the head collapse phenomenon observed in prior work, where only a fixed subset of attention heads contributes to generation. To mitigate head collapse, we propose a sink-aware training algorithm with an auxiliary load balancing loss designed for attention layers. Extensive experiments show that our method achieves effective head load balancing and improves model performance across Vanilla Attention, Sink Attention, and Gated Attention. We hope this study offers a new perspective on attention mechanisms and encourages further exploration of the inherent MoE structure within attention layers.
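The abstract does not spell out the form of the auxiliary load-balancing loss, but the MoE analogy suggests a Switch-Transformer-style balancing term transplanted from experts to attention heads. The sketch below illustrates that idea under assumptions of ours: `head_gates` is a hypothetical per-head utilization signal (e.g. a gate value or the attention mass a head places on non-sink tokens), and the loss is the standard `N * sum(f_i * P_i)` term, minimized when utilization is uniform across heads. This is a minimal illustration, not the paper's exact formulation.

```python
import numpy as np

def head_load_balance_loss(head_gates: np.ndarray) -> float:
    """Switch-Transformer-style load-balancing loss applied to
    attention heads instead of MoE experts (an illustrative sketch).

    head_gates: (batch, num_heads) non-negative per-head utilization
    scores; what exactly plays this role is an assumption here.
    """
    batch, num_heads = head_gates.shape
    # Normalize utilization into a per-example distribution over heads.
    probs = head_gates / np.clip(
        head_gates.sum(axis=-1, keepdims=True), 1e-9, None
    )
    # f_i: fraction of examples whose most-utilized head is head i.
    top = probs.argmax(axis=-1)
    f = np.bincount(top, minlength=num_heads) / batch
    # P_i: mean utilization probability assigned to head i.
    P = probs.mean(axis=0)
    # Equals 1.0 for perfectly uniform utilization; grows toward
    # num_heads as utilization collapses onto a single head.
    return float(num_heads * np.sum(f * P))

# Uniform utilization scores low; collapsed utilization scores high.
uniform = np.ones((8, 4))
collapsed = np.zeros((8, 4))
collapsed[:, 0] = 1.0
assert head_load_balance_loss(uniform) < head_load_balance_loss(collapsed)
```

Added to the main loss with a small coefficient, such a term pushes gradients toward spreading attention mass across heads, which matches the paper's stated goal of countering head collapse.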
Problem

Research questions and friction points this paper is trying to address.

attention sink
head collapse
Mixture-of-Experts
attention mechanisms
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

attention sink
Mixture-of-Experts
head collapse
sink-aware training
load balancing
Zizhuo Fu
Institute for Artificial Intelligence, Peking University, Beijing; School of Integrated Circuits, Peking University, Beijing
Wenxuan Zeng
Peking University
Efficient Deep Learning; Large Language Model
Runsheng Wang
Peking University
Meng Li
Peking University; Ex-Facebook
Efficient AI; Privacy Preserving AI