Systematic Outliers in Large Language Models

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Outliers in activations, weights, and attention scores are pervasive in large language models (LLMs), degrading performance and impeding model compression, yet their origins and functional roles remain poorly understood. This paper introduces the concept of "systematic outliers," theoretically establishing that they arise inherently from the nonlinear softmax operation in self-attention. Empirically, multi-dimensional distribution analysis, attention-decomposition experiments, and convergence verification reveal that these outliers act as implicit, context-aware scaling factors rather than stochastic noise. Crucially, structured outlier mitigation accelerates training convergence by up to 2.1× and significantly improves pruning and quantization efficiency, reducing parameter counts by 37% at equal accuracy. The implementation is publicly available.
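The summary's claim that outliers impede model compression can be illustrated with a small sketch (hypothetical, not the paper's code): in symmetric per-tensor int8 quantization, a single large activation outlier inflates the quantization scale, coarsening the grid for every other value.

```python
import numpy as np

def quantize_int8(x):
    # Minimal symmetric per-tensor int8 quantization sketch:
    # the scale is set by the largest absolute value in the tensor.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127)
    return q * scale  # dequantized reconstruction

rng = np.random.default_rng(0)
acts = rng.normal(size=1000).astype(np.float32)
err_clean = np.abs(quantize_int8(acts) - acts).mean()

acts_outlier = acts.copy()
acts_outlier[0] = 80.0  # one outlier inflates the quantization range
err_outlier = np.abs(quantize_int8(acts_outlier) - acts_outlier).mean()
```

Here `err_outlier` is far larger than `err_clean`: the outlier stretches the scale, so the remaining near-Gaussian values are rounded on a much coarser grid. This is one reason structurally eliminating outliers, as the paper proposes, benefits quantization.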

📝 Abstract
Outliers have been widely observed in Large Language Models (LLMs), significantly impacting model performance and posing challenges for model compression. Understanding the functionality and formation mechanisms of these outliers is critically important. Existing works, however, largely focus on reducing the impact of outliers from an algorithmic perspective, lacking an in-depth investigation into their causes and roles. In this work, we provide a detailed analysis of the formation process, underlying causes, and functions of outliers in LLMs. We define and categorize three types of outliers (activation outliers, weight outliers, and attention outliers) and analyze their distributions across different dimensions, uncovering inherent connections between their occurrences and their ultimate influence on the attention mechanism. Based on these observations, we hypothesize and explore the mechanisms by which these outliers arise and function, demonstrating through theoretical derivations and experiments that they emerge due to the self-attention mechanism's softmax operation. These outliers act as implicit context-aware scaling factors within the attention mechanism. As these outliers stem from systematic influences, we term them systematic outliers. Our study not only enhances the understanding of Transformer-based LLMs but also shows that structurally eliminating outliers can accelerate convergence and improve model compression. The code is available at https://github.com/an-yongqi/systematic-outliers.
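The abstract's core mechanism can be sketched with a toy example (an assumption-laden illustration, not the paper's derivation): because softmax outputs are strictly positive and must sum to one, attention can never be exactly zero on all tokens, so one extreme "sink" logit ends up absorbing most of the probability mass and acts as an implicit scaling factor on the remaining tokens.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: outputs are strictly positive
    # and sum to 1, so no token can receive exactly zero attention.
    e = np.exp(x - x.max())
    return e / e.sum()

# Ordinary logits: attention spreads near-uniformly over tokens.
logits = np.array([0.1, 0.2, 0.0, 0.15])
p_plain = softmax(logits)

# Adding one extreme logit (a hypothetical attention outlier) lets it
# soak up most of the mass, scaling down all other tokens' weights.
logits_sink = np.array([0.1, 0.2, 0.0, 0.15, 6.0])
p_sink = softmax(logits_sink)
```

In this sketch `p_sink[-1]` dominates the distribution, while the other tokens' weights shrink roughly in proportion; this mirrors the paper's claim that softmax-induced outliers function as context-aware scaling factors within attention.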
Problem

Research questions and friction points this paper is trying to address.

Analyze outlier formation in LLMs
Categorize and explain outlier types
Improve model compression via outlier elimination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes outlier formation in LLMs
Categorizes three types of outliers
Links outliers to self-attention mechanism
Yongqi An
PhD, Institute of Automation, Chinese Academy of Sciences
Efficient Models; Large Language Models
Xu Zhao
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Objecteye Inc., Beijing, China
Tao Yu
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Ming Tang
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Jinqiao Wang
Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Wuhan AI Research, Wuhan, China; Objecteye Inc., Beijing, China