Accelerating Large Language Model Inference via Early-Exiting Algorithms

📅 2025-09-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address system-level bottlenecks—particularly reduced batched inference throughput—caused by early-exit methods in large language model (LLM) deployment, this dissertation proposes an algorithm-architecture co-optimization framework. The method introduces: (1) a parallel decoding mechanism that eliminates sequential layer-wise exit dependencies; (2) a deep parameter-sharing architecture that mitigates synchronization overhead across multiple exit branches; and (3) a lightweight pre-trained router that jointly optimizes computational depth allocation and parameter efficiency. Crucially, the approach preserves model capability while significantly reducing inference latency and FLOPs. It establishes a new Pareto frontier between efficiency and accuracy: on mainstream LLMs, batched throughput improves by up to 2.3×, and average inference cost decreases by 41%. The framework delivers a scalable, system-aware solution for adaptive LLM inference.
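The router-plus-shared-depth idea above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `shared_block`, `route_depth`, and `MAX_DEPTH` are hypothetical names, and the "router" is a stand-in heuristic rather than the pretrained router the paper describes. The point is the control flow: depth is decided once per token up front, and one tied block is applied that many times.

```python
# Hypothetical sketch of per-token depth routing over a tied block.
# All names and the routing heuristic are illustrative, not the paper's API.

MAX_DEPTH = 4  # maximum number of recursions of the shared block

def shared_block(h):
    # Stand-in for one weight-tied transformer block: a toy update.
    return [0.5 * x + 0.1 for x in h]

def route_depth(h):
    # Toy router: "harder" tokens (larger mean activation) get more depth.
    norm = sum(abs(x) for x in h) / len(h)
    if norm < 0.2:
        return 1
    if norm < 0.5:
        return 2
    return MAX_DEPTH

def forward_token(h):
    depth = route_depth(h)      # decided once, up front: no layer-wise exits
    for _ in range(depth):
        h = shared_block(h)     # the same parameters are reused at every depth
    return h, depth

hidden = [0.9, -0.3, 0.7]       # one token's hidden state
out, depth = forward_token(hidden)
print(depth)                    # this token is routed to the maximum depth
```

Because the exit decision happens before the forward pass rather than at each layer, no intermediate exit classifiers need to be evaluated along the way, which is what removes the sequential exit dependency named in point (1).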

📝 Abstract
Large language models have achieved remarkable capabilities, but their practical deployment is hindered by significant computational costs. While adaptive computation methods like early-exiting promise to reduce these costs, they introduce a fundamental conflict: the per-token dynamism intended to save computation often creates system-level bottlenecks that can paradoxically reduce throughput in batched inference. This dissertation resolves this conflict by co-designing adaptive algorithms and model architectures to strike an optimal balance between dynamism and efficiency. To this end, our work first addresses critical sources of overhead in conventional early-exiting by proposing an efficient parallel decoding mechanism. We then show that deep parameter sharing provides an architectural foundation that not only yields compact, parameter-efficient models but also inherently mitigates the critical synchronization issues affecting dynamic inference. Finally, this work presents a unified framework where lightweight routers are pretrained to dynamically assign an optimal recursion depth for each token. This approach establishes a new Pareto frontier between efficiency and performance by effectively optimizing for both adaptive computation and parameter efficiency within a single model.
Problem

Research questions and friction points this paper is trying to address.

Resolving computational bottlenecks in early-exiting LLM inference
Balancing dynamic computation with system efficiency in batching
Optimizing token-level recursion depth via lightweight routing systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel decoding mechanism that reduces early-exiting overhead
Deep parameter-sharing architecture that mitigates synchronization overhead
Lightweight pretrained routers for dynamic depth assignment
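One way to see how pre-assigned depths restore batched throughput is depth bucketing: since every token's recursion depth is known before the forward pass, tokens can be grouped by depth and each group run as one dense batch. The sketch below is a plausible illustration of that idea under assumed names (`bucket_by_depth` and the example depths are not from the paper).

```python
# Hypothetical sketch: bucket a batch's tokens by their routed depth so
# each bucket runs as one dense batch, with no token stalled waiting on
# another token's exit decision. Names and values are illustrative.

def bucket_by_depth(token_ids, depths):
    buckets = {}
    for tid, d in zip(token_ids, depths):
        buckets.setdefault(d, []).append(tid)
    return buckets

# Router output for an 8-token batch (depths assumed, for illustration).
tokens = list(range(8))
depths = [1, 2, 4, 2, 1, 4, 4, 2]
buckets = bucket_by_depth(tokens, depths)

# Depth-d tokens share d applications of the tied block; the per-token
# divergence that hurts layer-wise early exit in a batch disappears.
for d in sorted(buckets):
    print(d, buckets[d])
```

Contrast this with conventional layer-wise early exiting, where tokens in a batch exit at unpredictable layers mid-pass, forcing padding or synchronization at every layer boundary.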