AdaDecode: Accelerating LLM Decoding with Adaptive Layer Parallelism

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency and limited parallelizability of autoregressive decoding in large language models (LLMs) for long-text generation, this paper proposes a layer-wise adaptive early-exit decoding method that requires no auxiliary model and modifies neither architecture nor parameters. The core innovation lies in dynamically predicting high-confidence tokens from intermediate-layer hidden states and executing the remaining computation in parallel; delayed layer scheduling and consistency verification guarantee output equivalence to standard autoregressive decoding. Unlike speculative decoding, the approach eliminates dependence on a separate drafter model, and unlike arbitrary layer skipping it avoids KV cache misalignment. Evaluated across diverse generative tasks, the method achieves up to a 1.73× throughput improvement while preserving exact output fidelity.

📝 Abstract
Large language models (LLMs) are increasingly used for long-content generation (e.g., long Chain-of-Thought reasoning) where decoding efficiency becomes a critical bottleneck: Autoregressive decoding is inherently limited by its sequential token generation process, where each token must be generated before the next can be processed. This sequential dependency restricts the ability to fully leverage modern hardware's parallel processing capabilities. Existing methods like speculative decoding and layer skipping offer potential speedups but have notable drawbacks: speculative decoding relies on an auxiliary "drafter" model, which can be challenging to acquire and increases memory overhead, while layer skipping may introduce discrepancies in the outputs due to the missing key-value cache at skipped layers. In this work, we propose AdaDecode, which accelerates LLM decoding without requiring auxiliary models or changes to the original model parameters, while ensuring output consistency. AdaDecode leverages the insight that many tokens can be accurately generated at intermediate layers, as further layers often do not significantly alter predictions once the model reaches a certain confidence. By adaptively generating tokens at intermediate layers when confidence is high, AdaDecode enables the next token's computation to begin immediately. The remaining layer computations for early-predicted tokens are deferred and executed in parallel with subsequent tokens when needed, maximizing hardware utilization and reducing decoding latency. A final verification step ensures that early predictions match the results of standard autoregressive decoding, preserving output parity. Experiments across diverse generation tasks show that AdaDecode consistently achieves superior decoding throughput with up to 1.73x speedup, while guaranteeing output parity with standard autoregressive decoding.
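The core loop described in the abstract — exit at an intermediate layer once confidence clears a threshold, defer the remaining layers, and later verify the early prediction against the full forward pass — can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: `LAYERS`, `lm_head`, `decode_step`, and `verify` are hypothetical stand-ins (a scalar hidden state and a two-token vocabulary) chosen only to make the control flow concrete.

```python
import math

# Toy stand-in for a transformer: each "layer" nudges a scalar hidden
# state, and the "LM head" turns it into a 2-token distribution.
# All names here are illustrative, not the paper's actual code.
LAYERS = [lambda h: h + 0.8, lambda h: h + 0.1, lambda h: h + 0.05]

def lm_head(h):
    """Softmax over a hypothetical 2-token vocabulary."""
    logits = [h, -h]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def decode_step(h0, threshold=0.9):
    """Predict a token, exiting early once confidence clears the threshold.

    Returns (token, exit_layer, deferred_layers); the deferred layers
    would later run in parallel with subsequent tokens to fill the
    missing KV-cache entries.
    """
    h = h0
    for i, layer in enumerate(LAYERS):
        h = layer(h)
        probs = lm_head(h)
        token = probs.index(max(probs))
        if max(probs) >= threshold:
            return token, i, LAYERS[i + 1:]
    return token, len(LAYERS) - 1, []

def verify(h0, token):
    """Consistency check: does the full forward pass agree with the
    early prediction? On mismatch, AdaDecode would fall back and
    re-decode from the verified prefix."""
    h = h0
    for layer in LAYERS:
        h = layer(h)
    probs = lm_head(h)
    return probs.index(max(probs)) == token
```

In this toy setup a confident hidden state exits after the first layer, leaving two layers deferred, and verification confirms the full pass would have produced the same token — which is what guarantees output parity with standard autoregressive decoding.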
Problem

Research questions and friction points this paper is trying to address.

Accelerating LLM decoding without auxiliary models
Reducing sequential dependency in autoregressive token generation
Ensuring output consistency while improving hardware utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive layer parallelism for token generation
Confidence-based early token prediction
Parallel deferred layer computation
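The "parallel deferred layer computation" idea above amounts to a scheduling question: deferred layer work for early-exited tokens is batched with the forward passes of later tokens, so each step processes several rows instead of one. The sketch below is a hypothetical scheduler, assuming one deferred layer retires per subsequent pass; the paper's actual delayed-layer scheduling may differ.

```python
from collections import deque

def schedule(steps):
    """steps: list of (token_id, deferred_layer_count) in decode order.

    Returns, per decoding step, which tokens share that forward pass:
    the current token plus any earlier tokens with deferred layers
    still outstanding. Illustrative only, not the paper's algorithm.
    """
    pending = deque()  # (token_id, remaining deferred layers)
    batches = []
    for token_id, deferred in steps:
        # One pass advances the current token plus all pending backlogs.
        batches.append([token_id] + [t for t, _ in pending])
        # Each pending token retires one deferred layer per pass.
        pending = deque((t, r - 1) for t, r in pending if r > 1)
        if deferred > 0:
            pending.append((token_id, deferred))
    return batches
```

For example, if token 0 exits early with two layers deferred, those layers piggyback on the passes for tokens 1 and 2 rather than stalling the stream, which is how hardware utilization improves without an auxiliary drafter.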