🤖 AI Summary
To address the latency overhead of scaling up draft models in speculative decoding, and the resulting trade-off between acceptance rate and inference speed, this paper proposes a bidirectional speculative verification pipeline. It establishes a parallel heterogeneous execution framework in which the target and draft models serve as each other's speculative paths, enabling cross-GPU/NPU collaborative computation, branch rollouts triggered by early-exit signals, and multi-token speculative streaming output. This design removes the serial bottleneck inherent in conventional autoregressive draft generation. Evaluated on 14B–66B language models, the method achieves 2.8×–5.8× end-to-end speedups, outperforming EAGLE3 by 30% on average. The core contribution is the first combination of bidirectional speculation with hardware-aware multi-token streaming scheduling, significantly alleviating the latency–acceptance-rate trade-off.
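For context, the baseline mechanism that this bidirectional design builds on is the standard draft-then-verify loop. Below is a minimal greedy sketch of that loop; the `speculative_decode` function, the `target_next`/`draft_next` callables, and the arithmetic toy models are all illustrative assumptions, not the paper's implementation:

```python
from typing import Callable, List

Token = int
NextFn = Callable[[List[Token]], Token]  # greedy next-token rule

def speculative_decode(target_next: NextFn, draft_next: NextFn,
                       prompt: List[Token], max_new: int,
                       k: int = 4) -> List[Token]:
    """Toy greedy draft-then-verify loop (illustrative only)."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # 1) Draft k tokens autoregressively with the cheap model.
        ctx, proposal = list(seq), []
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2) Verify: keep the longest prefix matching the target's
        #    greedy choices; at the first mismatch substitute the
        #    target's own token, so each round yields >= 1 good token.
        accepted_all = True
        for t in proposal:
            want = target_next(seq)
            if t == want:
                seq.append(t)
            else:
                seq.append(want)  # correction token from the target
                accepted_all = False
                break
        if accepted_all:
            seq.append(target_next(seq))  # bonus token from verify pass
    return seq[: len(prompt) + max_new]

# Demo with deterministic toy "models": the draft errs whenever the
# context length is a multiple of 5, yet the output still equals pure
# target-only greedy decoding (speculation never changes the result).
V = 50
def target_next(ctx): return (sum(ctx) * 7 + 3) % V
def draft_next(ctx): return (target_next(ctx) + (len(ctx) % 5 == 0)) % V

prompt = [1, 2, 3]
out = speculative_decode(target_next, draft_next, prompt, max_new=20)
ref = list(prompt)
for _ in range(20):
    ref.append(target_next(ref))
assert out == ref
```

Real systems verify the whole proposal in a single batched target forward pass; the per-position `target_next` calls here are only for clarity. The cost structure the summary describes follows directly: a larger draft model makes more of the proposal survive verification, but each drafted token is generated serially, which is the latency side of the trade-off.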
📝 Abstract
Speculative decoding accelerates LLM inference by using a draft model to look ahead, but its gains are capped by the cost of autoregressive draft generation: increasing draft-model size raises acceptance rates but adds latency, worsening the latency-acceptance tradeoff. Prior methods (Medusa, Hydra, EAGLE) partially reduce draft cost but either degrade acceptance or introduce overheads that limit scaling. We present Mirror Speculative Decoding (Mirror-SD), an inference algorithm that breaks the latency-acceptance tradeoff. Mirror-SD launches branch-complete rollouts from early-exit signals in parallel with the target model's suffix computation and explicitly maps work across heterogeneous accelerators (GPU and NPU) to exploit cross-device parallelism. The draft speculates forward continuations for the target to verify, while the target simultaneously speculates correction paths for the draft, converting speculation into two complementary execution pipelines. To further cut draft latency without weakening acceptance semantics, we add speculative streaming so the draft emits multiple tokens per step. This dual strategy of parallel heterogeneous execution plus multi-token speculative streaming pushes speculative decoding toward its ideal regime of high acceptance with low overhead. On SpecBench with server-scale models from 14B to 66B parameters, Mirror-SD delivers consistent end-to-end gains, achieving 2.8x-5.8x wall-time speedups across diverse tasks and a 30% average relative improvement over the strongest baseline, EAGLE3.
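The cross-device overlap the abstract describes can be illustrated, in heavily simplified form, with a two-worker pipeline: while one worker (the "target") verifies chunk i, a second worker optimistically drafts chunk i+1 from the not-yet-verified sequence, and the optimistic draft is discarded whenever verification rejects a token. The `pipelined_decode` function, the thread-based stand-in for a second accelerator, and the toy models are illustrative assumptions, not Mirror-SD's GPU/NPU implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List, Tuple

Token = int
NextFn = Callable[[List[Token]], Token]  # greedy next-token rule

def pipelined_decode(target_next: NextFn, draft_next: NextFn,
                     prompt: List[Token], max_new: int,
                     k: int = 4) -> List[Token]:
    """Overlap drafting of the next chunk with verification of the
    current one; a worker thread stands in for a second accelerator."""
    def draft_chunk(ctx: List[Token]) -> List[Token]:
        ctx, out = list(ctx), []
        for _ in range(k):
            t = draft_next(ctx)
            out.append(t)
            ctx.append(t)
        return out

    def verify(ctx: List[Token],
               chunk: List[Token]) -> Tuple[List[Token], bool]:
        # Accept tokens matching the target's greedy choice; at the
        # first mismatch emit the target's correction and stop.
        ctx, out = list(ctx), []
        for t in chunk:
            want = target_next(ctx)
            out.append(want)
            if t != want:
                return out, False
            ctx.append(t)
        return out, True

    seq = list(prompt)
    chunk = draft_chunk(seq)
    with ThreadPoolExecutor(max_workers=1) as pool:
        while len(seq) - len(prompt) < max_new:
            # Draft chunk i+1 assuming chunk i is fully accepted,
            # concurrently with the target's verification of chunk i.
            future = pool.submit(draft_chunk, seq + chunk)
            new_toks, all_ok = verify(seq, chunk)
            seq.extend(new_toks)
            # On any rejection the optimistic draft is stale: redraft.
            chunk = future.result() if all_ok else draft_chunk(seq)
    return seq[: len(prompt) + max_new]

# Toy deterministic models: the draft errs when the context length is
# a multiple of 5; pipelining never changes the target-greedy output.
V = 50
def target_next(ctx): return (sum(ctx) * 7 + 3) % V
def draft_next(ctx): return (target_next(ctx) + (len(ctx) % 5 == 0)) % V

prompt = [1, 2, 3]
out = pipelined_decode(target_next, draft_next, prompt, max_new=20)
ref = list(prompt)
for _ in range(20):
    ref.append(target_next(ref))
assert out == ref
```

This sketch shows only the overlap principle in one direction; per the abstract, Mirror-SD additionally runs the reverse pipeline (the target speculating correction paths for the draft) and places the two pipelines on heterogeneous accelerators, with speculative streaming letting the draft emit several tokens per step.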