OrthoRank: Token Selection via Sink Token Orthogonality for Efficient LLM Inference

📅 2025-07-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In large language models, the attention mechanism exhibits the “sink token” phenomenon: semantically impoverished tokens that receive disproportionately high attention weights. Method: This work reveals a layer-wise evolutionary pattern of sink tokens from the perspective of hidden-state similarity: non-sink tokens progressively converge toward sink tokens in deeper layers. Building on this insight, we propose OrthoRank, a dynamic token selection method grounded in orthogonality, which quantifies token importance via the cosine similarity between normalized hidden states and the sink token, weighted by layer depth. Contribution/Results: OrthoRank is the first method to incorporate orthogonality into sink-token-aware sparsification. At identical sparsity levels, it significantly outperforms layer pruning, achieving lower perplexity, higher zero-shot accuracy, and state-of-the-art performance on LongBench, while maintaining high throughput.

📝 Abstract
Attention mechanisms are central to the success of large language models (LLMs), enabling them to capture intricate token dependencies and implicitly assign importance to each token. Recent studies have revealed the sink token, which receives disproportionately high attention despite its limited semantic role. In this paper, we first extend the analysis of the relationship between the sink token and other tokens beyond attention, exploring their similarity in hidden states across layer depths. We observe that as the layers get deeper, the cosine similarity between the normalized hidden states of the sink token and those of other tokens increases, while the normalized hidden states of the sink token themselves exhibit negligible change. These observations imply that other tokens are consistently directed toward the sink token throughout the layers. Next, we propose a dynamic token selection method, called OrthoRank, that uses these findings to select important tokens. Specifically, at a given layer, we define token importance by the speed at which a token moves toward the sink token. This is converted into orthogonality with the sink token, meaning that tokens more orthogonal to the sink token are assigned greater importance. Finally, through extensive experiments, we demonstrate that our method achieves lower perplexity and higher zero-shot accuracy than layer pruning methods at the same sparsity ratio, with comparable throughput, while also achieving superior performance on LongBench.
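The selection rule described in the abstract can be sketched in a few lines: normalize the hidden states, measure each token's cosine similarity to the sink token, and keep the tokens that are most orthogonal to it. The snippet below is a minimal illustrative sketch of that idea, not the paper's implementation; the function name, the `1 - |cos|` importance score, and the `keep_ratio` parameter are assumptions for illustration (the paper additionally weights by layer depth).

```python
import numpy as np

def orthorank_select(hidden_states, sink_idx=0, keep_ratio=0.5):
    """Select tokens most orthogonal to the sink token (illustrative sketch).

    hidden_states: (num_tokens, dim) array of one layer's hidden states.
    sink_idx: position of the sink token (often the first token).
    keep_ratio: fraction of tokens to keep at this layer.
    """
    # Normalize hidden states to unit length, matching the cosine-similarity view.
    norms = np.linalg.norm(hidden_states, axis=-1, keepdims=True)
    normed = hidden_states / np.clip(norms, 1e-8, None)
    sink = normed[sink_idx]
    # Cosine similarity of every token with the sink token.
    cos = normed @ sink
    # Assumed importance score: larger when the token is more orthogonal to the sink.
    importance = 1.0 - np.abs(cos)
    importance[sink_idx] = np.inf  # always retain the sink token itself
    k = max(1, int(round(keep_ratio * hidden_states.shape[0])))
    selected = np.argsort(-importance)[:k]
    return np.sort(selected)
```

In a real decoder, only the selected token positions would participate in that layer's attention and MLP computation, which is what yields the throughput gains at a fixed sparsity ratio.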
Problem

Research questions and friction points this paper is trying to address.

Analyzes sink token behavior in LLM attention layers
Proposes OrthoRank for dynamic token selection via orthogonality
Improves perplexity and accuracy at high sparsity ratios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic token selection via orthogonality
Utilizes sink token similarity in hidden states
Improves perplexity and zero-shot accuracy
Seungjun Shin
Samsung Advanced Institute of Technology, Korea
Jaehoon Oh
Samsung Advanced Institute of Technology
Dokwan Oh
Samsung Advanced Institute of Technology, Korea