🤖 AI Summary
This work addresses the phenomenon of "attention sink" in Transformer models, wherein a disproportionate amount of attention is allocated to a small set of specific yet uninformative tokens, thereby undermining model interpretability, destabilizing training and inference, and exacerbating hallucination issues. The paper presents the first comprehensive survey of this phenomenon, introducing a three-dimensional classification framework—comprising Fundamental Utilization, Mechanistic Interpretation, and Strategic Mitigation—to systematically organize the evolving research landscape. By synthesizing recent findings on anomalous attention behaviors, the study constructs a structured knowledge base that clarifies core concepts and key challenges. It offers both theoretical insights and practical pathways for understanding and mitigating attention sink, and further supports community advancement by releasing a curated list of relevant publications.
📝 Abstract
As the foundational architecture of modern machine learning, Transformers have driven remarkable progress across diverse AI domains. Despite their transformative impact, a persistent challenge across Transformer variants is Attention Sink (AS), in which a disproportionate amount of attention is focused on a small subset of specific yet uninformative tokens. AS complicates interpretability, significantly affects training and inference dynamics, and exacerbates issues such as hallucination. In recent years, substantial research has been dedicated to understanding and harnessing AS. However, a comprehensive survey that systematically consolidates AS-related research and offers guidance for future advancements remains lacking. To address this gap, we present the first survey on AS, structured around three key dimensions that define the current research landscape: Fundamental Utilization, Mechanistic Interpretation, and Strategic Mitigation. Our work clarifies key concepts and guides researchers through the evolution and trends of the field. We envision this survey as a definitive resource, empowering researchers and practitioners to effectively manage AS within the current Transformer paradigm, while inspiring innovative advancements for the next generation of Transformers. The paper list of this work is available at https://github.com/ZunhaiSu/Awesome-Attention-Sink.
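To make the phenomenon concrete, here is a minimal sketch (not from the paper) of one common way to quantify an attention sink: measure the average attention mass that all queries assign to a designated token, often the first/BOS token. The data below is synthetic toy data, and `sink_mass` is a hypothetical helper name, not an API from the surveyed works.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sink_mass(scores, sink_idx=0):
    """Average attention weight on `sink_idx` under causal masking."""
    T = scores.shape[-1]
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # mask future positions
    scores = np.where(mask, -np.inf, scores)
    attn = softmax(scores, axis=-1)  # each row sums to 1
    return attn[:, sink_idx].mean()

rng = np.random.default_rng(0)
T = 8
scores = rng.normal(size=(T, T))
scores[:, 0] += 4.0  # inflate logits toward token 0, mimicking a sink

# With the inflated logits, most attention mass collapses onto token 0,
# even though that token carries no special semantic content.
print(f"attention mass on token 0: {sink_mass(scores):.2f}")
```

A real diagnostic would average this statistic over heads, layers, and inputs of an actual model; the toy logits here merely illustrate how a modest logit offset yields a disproportionate share of the softmax-normalized attention.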