🤖 AI Summary
Self-speculative decoding (SSD) accelerates LLM inference via layer-skipping to construct lightweight draft models, but its fixed skip-layer strategy suffers substantial performance degradation under domain shifts. To address this, we propose a KNN-driven dynamic domain-matching mechanism, the first to integrate parameter-free K-nearest-neighbor search into SSD. Our method dynamically retrieves optimal layer-skip configurations in real time based on input representations, enabling zero-training, zero-parameter cross-domain adaptation. Crucially, it requires no architectural modification to the backbone model or additional fine-tuning. Extensive experiments across multiple LLMs (Llama-2/3, Qwen) and diverse tasks (commonsense reasoning, code generation, mathematical QA) demonstrate consistent inference speedups of 1.3x-1.6x. Moreover, our approach significantly enhances SSD's robustness and generalization under distribution shift, establishing a new paradigm for adaptive speculative decoding.
📝 Abstract
Speculative Decoding (SD) has emerged as a widely used paradigm for accelerating the inference of large language models (LLMs) without compromising generation quality. It works by efficiently drafting multiple tokens with a compact model and then verifying them in parallel with the target LLM. Notably, Self-Speculative Decoding constructs the draft model by skipping certain layers of the target model, which eliminates the need for additional parameters or training. Despite its strengths, we observe in this work that drafting with layer skipping is highly sensitive to domain shifts, leading to a substantial drop in acceleration performance. To enhance the domain generalizability of this paradigm, we introduce KNN-SSD, an algorithm that leverages K-Nearest Neighbor (KNN) search to match different skipped layers to various domain inputs. We evaluated our algorithm across various models and multiple tasks, observing that it yields a 1.3x-1.6x speedup in LLM inference.
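The core retrieval step can be sketched as a parameter-free nearest-neighbor lookup: a small datastore maps representative input embeddings for each domain to a skip-layer configuration found offline, and at inference time the incoming input's representation is matched against it. The sketch below is illustrative only; the domain keys, skip-layer lists, and the `select_skip_layers` helper are assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical datastore: one representative hidden-state vector per known
# domain, each paired with a layer-skip configuration tuned offline.
# All values below are made up for illustration.
domain_keys = np.array([
    [0.9, 0.1, 0.0],   # e.g. commonsense reasoning
    [0.1, 0.8, 0.1],   # e.g. code generation
    [0.0, 0.2, 0.9],   # e.g. mathematical QA
])
skip_configs = [
    [3, 7, 11],        # layers to skip when drafting for domain 0
    [2, 5, 9],         # ... for domain 1
    [4, 8, 12],        # ... for domain 2
]

def select_skip_layers(input_repr: np.ndarray) -> list[int]:
    """Parameter-free 1-NN lookup: match the input representation to the
    nearest stored domain key and return that domain's skip-layer list."""
    dists = np.linalg.norm(domain_keys - input_repr, axis=1)
    return skip_configs[int(np.argmin(dists))]

# An input whose representation lies closest to the code-generation key:
print(select_skip_layers(np.array([0.15, 0.75, 0.1])))  # -> [2, 5, 9]
```

Because the lookup is training-free, adapting to a new domain only requires adding a (key, configuration) pair to the datastore; the backbone model itself is untouched.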