🤖 AI Summary
In decentralized LLM inference, network latency often dominates end-to-end inference time, and existing speculative decoding methods are ill-suited because they assume centralized execution and incur high communication overhead.
Method: We propose Decentralized Speculative Decoding (DSD), the first speculative decoding framework designed explicitly for decentralized settings. DSD employs lightweight draft models to generate candidate tokens locally, while distributed nodes verify the candidates in parallel. It also introduces a semantic-aware adaptive acceptance mechanism that dynamically adjusts acceptance thresholds based on token importance, without requiring model retraining.
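To make the adaptive acceptance idea concrete, here is a minimal sketch of what an importance-adjusted acceptance rule might look like. The function name, the importance score, and the linear threshold schedule are all illustrative assumptions, not the paper's actual mechanism; the point is only that semantically important tokens face a stricter check than filler tokens.

```python
def adaptive_accept(p_target: float, p_draft: float,
                    importance: float, base_threshold: float = 0.5) -> bool:
    """Accept a draft token if the target/draft probability ratio clears
    an importance-adjusted threshold.

    importance in [0, 1]: semantically critical tokens get a stricter
    (higher) threshold; filler tokens get a looser (lower) one.
    NOTE: hypothetical sketch; the paper's actual rule may differ.
    """
    threshold = base_threshold * (0.5 + importance)   # in [0.5b, 1.5b]
    ratio = min(1.0, p_target / max(p_draft, 1e-9))   # standard accept ratio
    return ratio >= threshold

# The same draft token can pass as filler but fail as an important token:
print(adaptive_accept(0.3, 0.9, importance=0.0))  # True  (loose check)
print(adaptive_accept(0.3, 0.9, importance=1.0))  # False (strict check)
```

Because the rule only rescales the acceptance test, it leaves the target model's weights untouched, which is consistent with the paper's claim of requiring no retraining.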
Contributions/Results: Theoretically, DSD reduces communication cost by approximately $(N-1)t_1(k-1)/k$, where $N$ is the number of nodes, $t_1$ the per-link communication latency, and $k$ the average number of tokens accepted per speculation round. Empirically, DSD achieves 2.56× and 2.59× end-to-end speedup on HumanEval and GSM8K, respectively, significantly outperforming Eagle3 while preserving full model accuracy.
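A quick worked example of the communication-saving estimate $(N-1)t_1(k-1)/k$: the numbers below are illustrative, not measurements from the paper. Intuitively, a verification round that accepts $k$ tokens amortizes one traversal of the $N-1$ links across $k$ tokens instead of one.

```python
def comm_saving(n_nodes: int, t1_ms: float, k: float) -> float:
    """Estimated cross-node communication time saved per token:
    (N-1) * t1 * (k-1) / k, with t1 the per-link latency in ms and
    k the average number of tokens accepted per speculation round."""
    return (n_nodes - 1) * t1_ms * (k - 1) / k

# e.g. 4 nodes, 50 ms per-link latency, k = 4 accepted tokens per round:
print(comm_saving(4, 50.0, 4))  # 112.5 ms saved per token
# With k = 1 (no speculation), the saving collapses to zero:
print(comm_saving(4, 50.0, 1))  # 0.0
```

Note how the saving grows with both the node count and the speculation length, and vanishes when speculation is disabled ($k=1$), matching the formula's structure.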
📝 Abstract
Speculative decoding accelerates large language model (LLM) inference by using a lightweight draft model to propose tokens that are later verified by a stronger target model. While effective in centralized systems, its behavior in decentralized settings, where network latency often dominates compute, remains under-characterized. We present Decentralized Speculative Decoding (DSD), a plug-and-play framework for decentralized inference that turns communication delay into useful computation by verifying multiple candidate tokens in parallel across distributed nodes. We further introduce an adaptive speculative verification strategy that adjusts acceptance thresholds by token-level semantic importance, delivering an additional 15% to 20% end-to-end speedup without retraining. In theory, DSD reduces cross-node communication cost by approximately $(N-1)t_1(k-1)/k$, where $N$ is the number of nodes, $t_1$ the per-link latency, and $k$ the average number of tokens accepted per round. In practice, DSD achieves up to 2.56× speedup on HumanEval and 2.59× on GSM8K, surpassing the Eagle3 baseline while preserving accuracy. These results show that adapting speculative decoding for decentralized execution provides a system-level optimization that converts network stalls into throughput, enabling faster distributed LLM inference with no model retraining or architectural changes.