Garbage Attention in Large Language Models: BOS Sink Heads and Sink-aware Pruning

πŸ“… 2026-01-11
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ€– AI Summary
This work addresses the significant structural redundancy observed in deep layers of large language models, which has lacked a functional explanation. We uncover, for the first time, the β€œBOS sink” phenomenon: attention heads with high BOS sink scores in deeper layers merely act as repositories for superfluous attention weights, exhibiting functional redundancy. Building on this insight, we propose a sink-aware pruning strategy that leverages BOS sink scores to identify and remove redundant attention heads, replacing conventional approaches based on weight or activation magnitudes. Experiments on Gemma-3, Llama-3.1, and Qwen3 demonstrate that our method more accurately locates redundant components, preserving performance close to that of dense models even under aggressive pruning, while maintaining robustness across varying sequence lengths.

πŸ“ Abstract
Large Language Models (LLMs) are known to contain significant redundancy, yet a systematic explanation for why certain components, particularly in higher layers, are more redundant has remained elusive. In this work, we identify the BOS sink phenomenon as a key mechanism driving this layer-wise sensitivity. We show that attention heads with high BOS sink scores are strongly associated with functional redundancy: such heads, especially in deeper layers, contribute little to predictive performance and effectively serve as \emph{dumping grounds} for superfluous attention weights. This provides a concrete functional explanation for the structural redundancy reported in prior studies. Leveraging this insight, we introduce a simple pruning strategy that removes high-BOS sink heads. Experiments on Gemma-3, Llama-3.1, and Qwen3 demonstrate that this approach identifies redundant transformer components more reliably than weight- or activation-based criteria, while preserving performance close to dense baselines even under aggressive pruning. Moreover, we find that the behavior of sink heads remains stable across different sequence lengths. Overall, our results suggest that structural properties of attention offer a more intuitive and robust basis for model compression than magnitude-based methods.
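The pruning criterion described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the BOS sink score of a head is the mean attention mass its queries place on the BOS token (position 0), and that pruning removes the heads with the highest scores. The function names and the fixed prune ratio are hypothetical.

```python
import numpy as np

def bos_sink_scores(attn, bos_index=0):
    """Per-head BOS sink score: mean attention mass that queries place on the
    BOS token. `attn` has shape (heads, queries, keys), rows summing to 1.
    (Hypothetical helper; the paper's exact scoring formula may differ.)"""
    return attn[:, :, bos_index].mean(axis=1)

def select_heads_to_prune(attn, keep_ratio=0.75):
    """Rank heads by BOS sink score and mark the highest-scoring ones for removal."""
    scores = bos_sink_scores(attn)
    n_prune = int(len(scores) * (1 - keep_ratio))
    order = np.argsort(scores)[::-1]  # highest sink score first
    return sorted(order[:n_prune].tolist())

# Toy example: 4 heads, 5 queries, 6 keys; head 2 dumps most of its mass on BOS.
rng = np.random.default_rng(0)
attn = rng.random((4, 5, 6))
attn[2, :, 0] += 10.0                     # make head 2 a strong BOS sink
attn /= attn.sum(axis=-1, keepdims=True)  # normalize rows like a softmax
print(select_heads_to_prune(attn, keep_ratio=0.75))  # β†’ [2]
```

In a real model the attention tensor would come from a forward pass over calibration data, and pruned heads would be removed structurally (e.g. by masking or slicing the projection matrices) rather than just listed.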
Problem

Research questions and friction points this paper is trying to address.

redundancy
large language models
attention heads
BOS sink
model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

BOS sink
attention redundancy
structured pruning
model compression
transformer heads
Jaewon Sok
Department of Rural Systems Engineering, Seoul National University
J. Yeom
Graduate School of Data Science, Seoul National University
Seonghyeon Park
Department of Aerospace Engineering, Seoul National University
Jeongjae Park
Graduate School of Data Science, Seoul National University
Taesup Kim
Assistant Professor, Seoul National University
Representation Learning, Transfer Learning, AI, Machine Learning, Deep Learning