🤖 AI Summary
This work addresses the limited semantic coverage of sparse retrieval (REST) in speculative decoding, which stems from its reliance on short contexts and exact string matching. We propose the first dense-retrieval-based speculative decoding framework. Our method constructs a non-parametric token sequence repository from contextualized token embeddings and employs approximate nearest neighbor (ANN) search for semantically driven, long-range retrieval of candidate sequences. The core contribution is the novel integration of dense retrieval into speculative decoding, overcoming the inherent semantic constraints of conventional sparse approaches. Experimental results show that, compared to REST, our method achieves an 87% higher acceptance rate on average, a 65% longer average accepted token length per speculation step, and 19% faster end-to-end generation.
📝 Abstract
Speculative decoding (SD) accelerates Large Language Model (LLM) generation by using an efficient draft model to propose the next few tokens, which the LLM verifies in a single forward call, reducing latency while preserving its outputs. We focus on retrieval-based SD, where the draft model retrieves the next tokens from a non-parametric datastore. Sparse retrieval (REST), which operates on the surface form of strings, is currently the dominant paradigm due to its simplicity and scalability. However, its effectiveness is limited by its reliance on short contexts and exact string matching. We instead introduce Dense Retrieval for Speculative Decoding (DReSD), a novel framework that uses approximate nearest neighbour search over contextualised token embeddings to retrieve the most semantically relevant token sequences for SD. Extensive experiments show that DReSD achieves, on average, 87% higher acceptance rates, 65% longer accepted token sequences, and 19% faster generation speeds than sparse retrieval (REST).
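To make the retrieval-based drafting loop concrete, below is a minimal, hypothetical sketch of a dense draft store: context embeddings serve as keys, and the token sequences that followed each context serve as values. Brute-force cosine similarity stands in for a real ANN index (e.g. FAISS) used at scale, and all class and method names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class DenseDraftStore:
    """Toy dense-retrieval datastore for speculative decoding drafts.

    Keys are (assumed) contextualised context embeddings; values are the
    token-id sequences that followed those contexts in a corpus.
    """

    def __init__(self, dim):
        self.dim = dim
        self.keys = np.empty((0, dim), dtype=np.float32)  # context embeddings
        self.values = []                                  # continuation token ids

    def add(self, context_embedding, continuation_tokens):
        # Normalise keys so the dot product below is cosine similarity.
        v = np.asarray(context_embedding, dtype=np.float32)
        v = v / (np.linalg.norm(v) + 1e-8)
        self.keys = np.vstack([self.keys, v[None, :]])
        self.values.append(list(continuation_tokens))

    def retrieve(self, query_embedding, k=1):
        # Brute-force nearest-neighbour search; a production system would
        # swap this for an approximate index over millions of entries.
        q = np.asarray(query_embedding, dtype=np.float32)
        q = q / (np.linalg.norm(q) + 1e-8)
        scores = self.keys @ q
        top = np.argsort(-scores)[:k]
        return [self.values[i] for i in top]

# Usage: index two contexts, then draft from the semantically closest one.
store = DenseDraftStore(dim=3)
store.add([1.0, 0.0, 0.0], [5, 6, 7])
store.add([0.0, 1.0, 0.0], [8, 9])
draft = store.retrieve([0.9, 0.1, 0.0], k=1)[0]  # nearest key is the first
```

In the full SD loop, the retrieved `draft` tokens would then be verified by the target LLM in a single forward pass, with accepted tokens appended to the output.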