Adaptive Layer Selection for Layer-Wise Token Pruning in LLM Inference

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing token pruning methods that rely on fixed layer selection suffer significant performance degradation on complex tasks such as key-value (KV) retrieval. To address this limitation, this work proposes Adaptive Selection of Layers (ASL), a training-free approach that dynamically selects the pruning layer during the prefill phase by computing the variance of token ranks derived from attention scores, while adhering to a user-specified KV cache budget. ASL is the first adaptive layer selection mechanism of this kind: pruning decisions made at a shallow layer are propagated to deeper layers via a one-shot strategy, and the method integrates seamlessly with decoding-stage optimizations such as SnapKV. Experimental results on the InfiniteBench, RULER, and NIAH benchmarks show that ASL consistently outperforms state-of-the-art methods, with superior robustness and generalization across diverse tasks while maintaining decoding speed and compression ratios.
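
A minimal sketch of the layer-selection idea described above, assuming attention scores are collected per layer during prefill. The function name `select_pruning_layer`, the head-wise aggregation, and the use of the lowest mean rank variance as the stability criterion are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def select_pruning_layer(attn_scores_per_layer):
    """Pick a pruning layer from per-layer prefill attention scores.

    attn_scores_per_layer: list of tensors, one per layer, each of shape
    [num_heads, q_len, kv_len]. Hypothetical criterion: choose the layer
    whose token ranking is most stable across heads (lowest rank variance).
    """
    variances = []
    for scores in attn_scores_per_layer:
        # Attention mass each KV token receives, per head.
        per_head = scores.sum(dim=1)  # [num_heads, kv_len]
        # Convert scores to ranks within each head (0 = most attended).
        order = per_head.argsort(dim=-1, descending=True)
        ranks = order.argsort(dim=-1).float()  # [num_heads, kv_len]
        # Variance of each token's rank across heads, averaged over tokens.
        variances.append(ranks.var(dim=0).mean().item())
    # Assumption: lower variance means a more reliable token ordering.
    return min(range(len(variances)), key=variances.__getitem__)
```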

📝 Abstract
Due to the prevalence of large language models (LLMs), key-value (KV) cache reduction for LLM inference has received remarkable attention. Among the many approaches proposed in recent years, layer-wise token pruning, which selects a subset of tokens at particular layers to retain in the KV cache and prunes the rest, is one of the most popular schemes. Existing methods primarily adopt a pre-defined set of layers at which tokens are selected. Such a design is inflexible: accuracy varies significantly across tasks and deteriorates on harder tasks such as KV retrieval. In this paper, we propose ASL, a training-free method that adaptively chooses the selection layer for KV cache reduction by exploiting the variance of token ranks ordered by attention score. The proposed method balances performance across different tasks while meeting a user-specified KV budget. ASL operates during the prefill stage and can be combined with existing KV cache reduction methods such as SnapKV to optimize the decoding stage. Through evaluations on the InfiniteBench, RULER, and NIAH benchmarks, we show that, equipped with one-shot token selection, where tokens are selected at one layer and propagated to deeper layers, ASL outperforms state-of-the-art layer-wise token selection methods in accuracy while maintaining decoding speed and KV cache reduction.
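
A minimal sketch of the one-shot propagation step, under illustrative assumptions: a simple list-of-tuples KV cache layout, scoring tokens by their summed attention at the selection layer, and a plain top-k keep rule standing in for whatever scoring the paper actually uses:

```python
import torch

def one_shot_prune(kv_cache, attn_scores, select_layer, kv_budget):
    """Prune the KV cache once at `select_layer` and propagate the choice.

    kv_cache: list of (K, V) tuples, each [num_heads, seq_len, head_dim].
    attn_scores: [num_heads, q_len, kv_len] attention at the selection layer.
    kv_budget: number of tokens to retain (the user-specified budget).
    """
    # Score each KV token by the total attention it receives.
    token_scores = attn_scores.sum(dim=(0, 1))  # [kv_len]
    keep = token_scores.topk(kv_budget).indices.sort().values
    # One-shot: the same kept indices are reused for all deeper layers,
    # so no further selection is needed past `select_layer`.
    for layer in range(select_layer, len(kv_cache)):
        k, v = kv_cache[layer]
        kv_cache[layer] = (k[:, keep, :], v[:, keep, :])
    return kv_cache
```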
Problem

Research questions and friction points this paper is trying to address.

layer-wise token pruning
KV cache reduction
large language models
adaptive layer selection
inference efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive layer selection
token pruning
KV cache reduction
training-free
attention score variance