Position-Aware Depth Decay Decoding ($D^3$): Boosting Large Language Model Inference Efficiency

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost of large language model (LLM) inference—where efficiency and performance are often at odds—this paper proposes a training-free, dynamic layer-skipping method. The core innovation is a token-position-aware power-law decay strategy for layer retention: the number of retained layers at decoding step $i$ is adaptively determined as $\lfloor L \times \alpha^i \rfloor$, where $L$ is the total number of layers and $\alpha \in (0,1)$. This enables zero-shot acceleration of full-parameter LLMs (7B–70B) without architectural or parametric modifications. Crucially, the original model remains entirely intact—no fine-tuning, quantization, or pruning is required. Experiments on the Llama family demonstrate an average 1.5× speedup in inference latency, with less than 1% degradation on both GSM8K and Big-Bench Hard (BBH) benchmarks—effectively preserving model capability while substantially reducing FLOPs.
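The retention schedule above is simple enough to state directly in code. The following is a minimal sketch, not the authors' implementation; the function name `retained_layers` is ours, and we apply the formula exactly as given, $\lfloor L \times \alpha^i \rfloor$:

```python
import math

def retained_layers(L: int, alpha: float, i: int) -> int:
    """Power-law decay from the paper: keep floor(L * alpha^i) layers
    when generating token T_i, where L is the total layer count and
    alpha is in (0, 1), typically chosen close to 1."""
    return math.floor(L * alpha ** i)
```

For a 32-layer model with $\alpha = 0.99$, the first token uses all 32 layers, while the 10th token uses only $\lfloor 32 \times 0.99^{10} \rfloor = 28$, so depth decays smoothly with position.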

📝 Abstract
Due to the large number of parameters, the inference phase of Large Language Models (LLMs) is resource-intensive. Unlike traditional model compression, which needs retraining, recent dynamic computation methods show that not all components are required for inference, enabling a training-free pipeline. In this paper, we focus on the dynamic depth of LLM generation. A token-position-aware layer-skipping framework is proposed to save 1.5× operations efficiently while maintaining performance. We first observed that tokens predicted later have lower perplexity and thus require less computation. Then, we propose a training-free algorithm called Position-Aware Depth Decay Decoding ($D^3$), which leverages a power-law decay function, $\left\lfloor L \times \alpha^i \right\rfloor$, to determine the number of layers to retain when generating token $T_i$. Remarkably, without any retraining, $D^3$ achieves success across a wide range of generation tasks for the first time. Experiments on large language models (i.e., the Llama series) with $7 \sim 70$ billion parameters show that $D^3$ can achieve an average 1.5× speedup compared with the full-inference pipeline while maintaining comparable performance, with nearly no performance drop ($<1\%$) on the GSM8K and BBH benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Reduces resource-intensive LLM inference without retraining.
Proposes token-position aware layer skipping for efficiency.
Achieves 1.5x speedup with minimal performance drop.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Position-aware layer skipping for efficiency
Training-free dynamic depth adjustment
Power-law decay function optimizes layer retention
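The innovation bullets above can be illustrated as a toy decoding loop. This is a hypothetical sketch of ours, not the paper's code: each "layer" is just a callable, and generating token $T_i$ runs only the first $\lfloor L \times \alpha^i \rfloor$ layers, clamped to at least one so the forward pass is never empty:

```python
import math

def decode_with_depth_decay(layers, hidden, num_tokens, alpha=0.5):
    """Toy sketch (not the authors' implementation): for token T_i,
    run only the first floor(L * alpha^i) layers of the stack."""
    L = len(layers)
    outputs = []
    for i in range(num_tokens):
        # Position-aware retained depth, clamped to >= 1 layer.
        k = max(1, math.floor(L * alpha ** i))
        h = hidden
        for layer in layers[:k]:
            h = layer(h)
        outputs.append(h)
    return outputs
```

With eight increment-by-one "layers", an aggressive $\alpha = 0.5$, and an initial hidden value of 0, the three generated outputs are 8, 4, and 2: later tokens traverse progressively fewer layers, which is exactly the source of the reported FLOP savings.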
Siqi Fan
University of Electronic Science and Technology of China, Chengdu, China
Xuezhi Fang
Beijing Academy of Artificial Intelligence, Beijing, China
Xingrun Xing
Institute of Automation, Chinese Academy of Sciences, Beijing, China
Peng Han
Professor, Department of Computer Science, UESTC
drug discovery, spatial temporal, data mining
Shuo Shang
Computer Science & AI Scientist
Spatial data, Spatiotemporal databases
Yequan Wang
Beijing Academy of Artificial Intelligence, Beijing, China