Cascade Speculative Drafting for Even Faster LLM Inference

📅 2023-12-18
🏛️ Neural Information Processing Systems
📈 Citations: 52
Influential: 3
🤖 AI Summary
To address two inefficiencies in LLM speculative decoding, namely slow autoregressive draft generation and uniform per-token allocation of drafting compute, this paper proposes Cascade Speculative Drafting (CS Drafting). CS Drafting introduces two mechanisms: a vertical cascade, which eliminates autoregressive generation by neural draft models, and a horizontal cascade, which allocates drafting time unevenly across token positions according to their importance. Crucially, the method preserves the exact output distribution of the target model without any modification to the model architecture or training procedure. Experiments show that CS Drafting achieves up to an 81% additional speedup over standard speculative decoding. The method is model-agnostic, requires no retraining, and is fully open-sourced.
📝 Abstract
Introduced to enhance the efficiency of large language model (LLM) inference, speculative decoding operates by having a smaller model generate a draft. A larger target model then reviews this draft to align with its output, and any acceptance by the target model reduces the number of target model runs, ultimately improving efficiency. However, the drafting process in speculative decoding includes slow autoregressive generation and allocates equal time to generating tokens, irrespective of their importance. These inefficiencies collectively contribute to the suboptimal performance of speculative decoding. To further improve LLM inference, we introduce Cascade Speculative Drafting (CS Drafting), a speculative execution algorithm that incorporates two types of cascades. The Vertical Cascade eliminates autoregressive generation from neural models, while the Horizontal Cascade optimizes time allocation in drafting for improved efficiency. Combining both cascades, CS Drafting achieves up to an 81 percent additional speedup over speculative decoding in our experiments, while maintaining the same output distribution as the target model. Our code is publicly available at https://github.com/lfsszd/CS-Drafting.
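The draft-then-verify loop the abstract describes can be sketched as follows. This is a toy greedy variant of baseline speculative decoding; the two stand-in "models" are trivial next-token functions, and all names and the token arithmetic are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of greedy speculative decoding, the baseline CS Drafting
# improves on. Toy next-token functions stand in for real LLMs.

def draft_model(tokens):
    # Toy small/draft model: always predicts (last token + 1) mod 5.
    return (tokens[-1] + 1) % 5

def target_model(tokens):
    # Toy large/target model: agrees with the draft except after token 3.
    return 0 if tokens[-1] == 3 else (tokens[-1] + 1) % 5

def speculative_step(tokens, k=4):
    """One round: draft k tokens autoregressively with the small model,
    then let the target verify them (a single batched pass for a real LLM)
    and keep the longest agreeing prefix plus one corrected token."""
    draft, ctx = [], list(tokens)
    for _ in range(k):                      # slow, sequential drafting
        t = draft_model(ctx)
        draft.append(t)
        ctx.append(t)
    accepted, ctx = [], list(tokens)
    for t in draft:                         # verification
        expected = target_model(ctx)
        if t == expected:
            accepted.append(t)              # draft token accepted
            ctx.append(t)
        else:
            accepted.append(expected)       # replace first mismatch
            break
    else:
        accepted.append(target_model(ctx))  # all accepted: one extra token
    return tokens + accepted

seq = speculative_step([0])  # [0, 1, 2, 3, 0]
```

Each round costs one target-model pass but can emit several tokens, which is the efficiency the abstract refers to; the two cascades attack the remaining costs of the sequential drafting loop.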
Problem

Research questions and friction points this paper is trying to address.

Improves efficiency of large language model inference
Reduces slow autoregressive generation in drafting
Optimizes time allocation for token generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vertical Cascade eliminates autoregressive generation
Horizontal Cascade optimizes time allocation
CS Drafting combines cascades for speedup
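The vertical-cascade bullet can be illustrated with a short sketch: each draft model's own autoregressive drafting is itself sped up by an even smaller drafter, bottoming out in a cheap statistical guesser that needs no neural autoregression. The tier structure, function names, and toy token logic here are assumptions for illustration, not the authors' code:

```python
# Hedged sketch of the vertical-cascade idea (greedy verification).

def statistical_draft(tokens, k):
    # Lowest tier: a trivial n-gram-style guesser, no neural model at all.
    return [(tokens[-1] + i + 1) % 5 for i in range(k)]

def cascade_draft(models, tokens, k):
    """Draft k tokens with models[0], using models[1:] as its own drafters.
    models is ordered largest drafter first; an empty list falls back to
    the statistical guesser."""
    if not models:
        return statistical_draft(tokens, k)
    model, out, ctx = models[0], [], list(tokens)
    while len(out) < k:
        # Speculate with the smaller tiers, verify greedily with this tier.
        proposal = cascade_draft(models[1:], ctx, k - len(out))
        for t in proposal:
            expected = model(ctx)
            ctx.append(expected)
            out.append(expected)            # accept match or emit correction
            if t != expected or len(out) == k:
                break
    return out

tiers = [lambda ctx: (ctx[-1] + 1) % 5]     # one toy "neural" drafter tier
draft = cascade_draft(tiers, [0], 3)        # [1, 2, 3]
```

A horizontal cascade would additionally vary which tier drafts each position, spending the larger drafters on early tokens (which are more likely to be accepted) and cheaper ones on later tokens.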
Ziyi Chen
Department of Computer Science, University of Illinois at Urbana-Champaign
Xiaocong Yang
Department of Computer Science, University of Illinois at Urbana-Champaign
Jiacheng Lin
University of Illinois at Urbana-Champaign
Machine Learning · Foundation Models · Healthcare · Recommendation System
Chenkai Sun
University of Illinois at Urbana-Champaign
Natural Language Processing · Deep Learning · Artificial Intelligence
Kevin Chen-Chuan Chang
University of Illinois at Urbana-Champaign
Data Management · Data Integration · Databases · Data Mining
Jie Huang
Department of Computer Science, University of Illinois at Urbana-Champaign