SpecExtend: A Drop-in Enhancement for Speculative Decoding of Long Sequences

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance degradation of speculative decoding on long sequences, caused by increased attention overhead and declining draft quality, this paper proposes a training-free, plug-and-play enhancement framework. Methodologically, it (1) introduces a cross-model dynamic KV cache retrieval mechanism that adaptively selects context for the draft model using attention scores computed by the target model, and (2) integrates FlashAttention and hybrid tree attention across the dual-model (draft/target) architecture to enable efficient tree-structured speculative decoding. Evaluated on inputs up to 16K tokens, the approach achieves up to a 2.22x speedup over standard tree-based speculative decoding, significantly improving inference efficiency for long-text understanding tasks. The core contribution is a lightweight co-optimization paradigm that jointly improves computational efficiency and draft quality without requiring model retraining.
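The Cross-model Retrieval idea described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the chunking scheme, and the scoring rule (mean attention mass per chunk) are assumptions; the paper's actual selection logic may differ.

```python
import numpy as np

def cross_model_retrieval(target_attn, chunk_size, top_k):
    """Hypothetical sketch of Cross-model Retrieval: score fixed-size chunks
    of the prefix by the TARGET model's attention to them, then keep only the
    top-k chunks in the DRAFT model's KV cache. All names/parameters here are
    illustrative assumptions, not the paper's API."""
    seq_len = target_attn.shape[-1]
    n_chunks = seq_len // chunk_size
    # Average attention mass per chunk (any ragged tail is dropped for simplicity).
    chunk_scores = (
        target_attn[: n_chunks * chunk_size].reshape(n_chunks, chunk_size).mean(axis=1)
    )
    # Indices of the k most-attended chunks, kept in original sequence order
    # so the draft model's retained context stays causally ordered.
    keep = np.sort(np.argsort(chunk_scores)[-top_k:])
    return keep
```

Because selection is driven by the target model's attention rather than the draft model's own (weaker) signal, the draft cache stays small while tracking the context the target actually relies on.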

📝 Abstract
Speculative decoding is a widely adopted technique for accelerating inference in large language models (LLMs), but its performance degrades on long inputs due to increased attention cost and reduced draft accuracy. We introduce SpecExtend, a drop-in enhancement that improves the performance of speculative decoding on long sequences without any additional training. SpecExtend integrates efficient attention mechanisms such as FlashAttention and Hybrid Tree Attention into both the draft and target models, reducing latency across all stages. To improve draft accuracy and speed, we propose Cross-model Retrieval, a novel KV cache update strategy that uses the target model's attention scores to dynamically select relevant context for the draft model. Extensive evaluations on three long-context understanding datasets show that SpecExtend accelerates standard tree-based speculative decoding by up to 2.22x for inputs up to 16K tokens, providing an effective solution for speculative decoding of long sequences. The code is available at https://github.com/jycha98/SpecExtend .
Problem

Research questions and friction points this paper is trying to address.

Enhances speculative decoding for long sequences without retraining
Reduces latency via efficient attention mechanisms in draft/target models
Improves draft accuracy with dynamic KV cache selection strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates FlashAttention and Hybrid Tree Attention
Proposes Cross-model Retrieval for KV cache
Accelerates decoding up to 2.22x for 16K tokens
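For context on where these speedups come from, the core speculative decoding loop can be illustrated with a toy sketch. This shows plain greedy draft-and-verify, not the paper's tree-based variant or its attention optimizations; the function names and the deterministic toy models in the usage are assumptions for illustration only.

```python
def speculative_step(draft_next, target_next, prompt, k=4):
    """Toy greedy speculative decoding step (illustrative only): the draft
    model proposes k tokens cheaply, the target model verifies them, and the
    longest agreeing prefix is accepted. draft_next/target_next are assumed
    callables mapping a token list to the next token."""
    # Draft phase: propose k tokens autoregressively with the cheap model.
    proposal, ctx = [], list(prompt)
    for _ in range(k):
        tok = draft_next(ctx)
        proposal.append(tok)
        ctx.append(tok)
    # Verify phase: accept proposed tokens while the target model agrees.
    accepted, ctx = [], list(prompt)
    for tok in proposal:
        if target_next(ctx) == tok:
            accepted.append(tok)
            ctx.append(tok)
        else:
            break
    # Always emit one token from the target so progress is guaranteed.
    accepted.append(target_next(ctx))
    return accepted
```

When the draft agrees with the target often (high draft accuracy), several tokens are emitted per expensive target pass; SpecExtend's retrieval mechanism aims to keep that acceptance rate high on long inputs.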
Jungyoub Cha
Seoul National University
Hyunjong Kim
Seoul National University
Natural Language Processing · Large Language Models · Deep Learning
Sungzoon Cho
Seoul National University