SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting

📅 2025-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational and memory overheads and strong hardware dependency in large language model (LLM) inference, this paper proposes a speculative early-exit acceleration method that requires no parameter modification. The approach features a three-tier collaborative mechanism: (1) a probability-aware lightweight predictor for low-overhead exit decisions; (2) a two-level heuristic prediction scheduler integrating distributional and contextual similarity; and (3) a context-aware fusion mapping framework supporting diverse decoding strategies. Orthogonally integrated with GPU-parallel speculation, dynamic layer scheduling, quantization compatibility, and sparse activation, the method achieves 2.25× and 2.43× inference speedup on Llama2-7B in cloud and PC settings, respectively, with negligible accuracy degradation, minimal training overhead, and zero parameter alteration.

📝 Abstract
Early exiting has recently emerged as a promising technique for accelerating large language models (LLMs) by effectively reducing hardware computation and memory access. In this paper, we present SpecEE, a fast LLM inference engine with speculative early exiting. (1) At the algorithm level, we propose a speculation-based lightweight predictor design that exploits the probabilistic correlation between speculative tokens and the correct results, together with the high parallelism of GPUs. (2) At the system level, we observe that not all layers need a predictor and design a two-level heuristic predictor scheduling engine based on skewed distribution and contextual similarity. (3) At the mapping level, we observe that different decoding methods share the same essential characteristics, and propose a context-aware merged mapping for the predictor with efficient GPU implementations to support speculative decoding, forming a framework that accommodates various orthogonal acceleration techniques (e.g., quantization and sparse activation) in cloud and personal computer (PC) scenarios, successfully pushing the Pareto frontier of accuracy and speedup. Notably, SpecEE can be applied to any LLM with negligible training overhead in advance, without affecting the model's original parameters. Extensive experiments show that SpecEE achieves 2.25× and 2.43× speedup with Llama2-7B in cloud and PC scenarios, respectively.
Problem

Research questions and friction points this paper is trying to address.

Accelerating LLM inference via speculative early exiting
Designing lightweight predictors using GPU parallelism
Optimizing predictor scheduling for skewed token distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative lightweight predictor for GPU parallelism
Two-level heuristic predictor scheduling engine
Context-aware merged mapping for efficient decoding
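The early-exit mechanism sketched above — a lightweight per-layer predictor that decides whether decoding can stop before the final layer, attached only to a scheduled subset of layers — can be illustrated roughly as follows. This is a toy sketch under stated assumptions, not the paper's implementation: the logistic-classifier predictor form, the feature construction (hidden state concatenated with a speculative-token logit), and all names are illustrative.

```python
import numpy as np

def lightweight_predictor(hidden, spec_logit, weights, bias):
    # Illustrative stand-in for SpecEE's predictor: a tiny logistic
    # classifier over the layer's hidden state plus a speculative-token
    # logit, estimating whether exiting now would yield the right token.
    features = np.concatenate([hidden, [spec_logit]])
    z = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

def forward_with_early_exit(layers, x, predictors, threshold=0.9):
    # Run layers in order; after each layer that carries a predictor
    # (not all do, mirroring the two-level scheduling idea), exit once
    # the predicted confidence passes the threshold.
    for i, layer in enumerate(layers):
        x = layer(x)
        pred = predictors.get(i)
        if pred is None:
            continue  # no predictor scheduled at this layer
        weights, bias, spec_logit = pred
        if lightweight_predictor(x, spec_logit, weights, bias) >= threshold:
            return x, i + 1  # early exit after layer i
    return x, len(layers)    # fell through to full depth
```

For example, with six toy layers and a single confident predictor scheduled at layer index 2, the loop stops after three layers instead of six, which is the source of the speedup the paper measures.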
Jiaming Xu
Shanghai Jiao Tong University
Jiayi Pan
Shanghai Jiao Tong University
Yongkang Zhou
Shanghai Jiao Tong University
Siming Chen
Shanghai Jiao Tong University
Jinhao Li
Shanghai Jiao Tong University
Yaoxiu Lian
Shanghai Jiao Tong University
Junyi Wu
Shanghai Jiao Tong University
Guohao Dai
Associate Professor, Shanghai Jiao Tong University
Sparse Computation · Large-scale Graph Processing · FPGA · Circuits and Systems