Demystifying Singular Defects in Large Language Models

📅 2025-02-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the poorly understood origin of high-norm tokens in large language models (LLMs), for which existing analytical frameworks (e.g., those developed for Vision Transformers) do not apply. The authors propose a systematic singular-defect analysis tailored to LLMs. A layer-wise singular-direction prediction mechanism reveals that inter-layer singular directions govern abrupt token-norm explosions, while negative eigenvalues drive sudden norm decay; the analysis further distinguishes the computational pathways of initial and non-initial tokens. In addition, high-norm tokens are shown to be triggered by the leading right singular vector of the matrix approximating the corresponding modules, unifying linear approximation, singular value decomposition (SVD), and spectral analysis in one modeling framework. Empirical validation across diverse LLMs (Llama, Qwen, Phi) confirms broad applicability: under INT4 quantization, perplexity (PPL) improves by 12%, and lightweight fingerprint signatures (<1 KB) are enabled. The implementation is open-sourced.

📝 Abstract
Large transformer models are known to produce high-norm tokens. In vision transformers (ViTs), such tokens have been mathematically modeled through the singular vectors of the linear approximations of layers. However, in large language models (LLMs), the underlying causes of high-norm tokens remain largely unexplored, and their properties differ from those in ViTs, requiring a new analysis framework. In this paper, we provide both theoretical insights and empirical validation across a range of recent models, leading to the following observations: i) The layer-wise singular direction predicts the abrupt explosion of token norms in LLMs. ii) The negative eigenvalues of a layer explain its sudden decay. iii) The computational pathways leading to high-norm tokens differ between initial and non-initial tokens. iv) High-norm tokens are triggered by the right leading singular vector of the matrix approximating the corresponding modules. We showcase two practical applications of these findings: the improvement of quantization schemes and the design of LLM signatures. Our findings not only advance the understanding of singular defects in LLMs but also open new avenues for their application. We expect that this work will stimulate further research into the internal mechanisms of LLMs and will therefore publicly release our code.
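Observation iv) rests on a standard SVD property: for a linear map W, the unit input that maximizes the output norm is the leading right singular vector, and the resulting gain is the top singular value. The toy numpy sketch below illustrates this property with a random stand-in matrix; it is not the paper's analysis code, and the matrix here is not an actual layer approximation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the linear approximation of a transformer module
# (in the paper this would be derived from actual model weights).
W = rng.standard_normal((64, 64))

U, S, Vt = np.linalg.svd(W)
v1 = Vt[0]  # leading right singular vector (unit norm)

# A token embedding aligned with v1 is amplified by the top singular value S[0].
gain_aligned = np.linalg.norm(W @ v1)

# A random unit-norm input is amplified strictly less.
x = rng.standard_normal(64)
x /= np.linalg.norm(x)
gain_random = np.linalg.norm(W @ x)

print(gain_aligned, S[0], gain_random)
```

Intuitively, if the residual-stream input to a module has a large component along v1, the module's output norm surges, which is the mechanism the paper links to high-norm tokens.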
Problem

Research questions and friction points this paper is trying to address.

Identify causes of high-norm tokens in LLMs
Develop analysis framework for LLM token properties
Apply findings to improve quantization and signature design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise singular direction predicts norms
Negative eigenvalues explain layer decay
High-norm tokens differ by computational pathways
Haoqi Wang
PhD, EPFL
computer vision, artificial intelligence
Tong Zhang
School of Computer and Communication Sciences, EPFL, Switzerland
Mathieu Salzmann
EPFL
computer vision, machine learning