Beyond Tokens: Semantic-Aware Speculative Decoding for Efficient Inference by Probing Internal States

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high inference latency of large language models under autoregressive decoding, a bottleneck that existing speculative decoding methods only partially relieve because they rely on token-level verification and ignore semantic equivalence. The authors propose SemanticSpec, the first semantic-aware speculative decoding framework. By probing the model's internal hidden states to estimate the probabilities of semantic sequences, SemanticSpec speculates and verifies at the level of meanings rather than individual tokens. Moving beyond token-level verification yields up to 2.7× and 2.1× speedups on DeepSeekR1-32B and QwQ-32B, respectively, substantially outperforming token-level and sequence-level baselines.

📝 Abstract
Large Language Models (LLMs) achieve strong performance across many tasks but suffer from high inference latency due to autoregressive decoding. The issue is exacerbated in Large Reasoning Models (LRMs), which generate lengthy chains of thought. While speculative decoding accelerates inference by drafting and verifying multiple tokens in parallel, existing methods operate at the token level and ignore semantic equivalence (i.e., different token sequences expressing the same meaning), leading to inefficient rejections. We propose SemanticSpec, a semantic-aware speculative decoding framework that verifies entire semantic sequences instead of tokens. SemanticSpec introduces a semantic probability estimation mechanism that probes the model's internal hidden states to assess the likelihood of generating sequences with specific meanings. Experiments on four benchmarks show that SemanticSpec achieves up to 2.7x speedup on DeepSeekR1-32B and 2.1x on QwQ-32B, consistently outperforming token-level and sequence-level baselines in both efficiency and effectiveness.
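The core idea, verifying a drafted sequence by its meaning rather than its exact tokens, can be illustrated with a toy sketch. The code below is an assumption-laden illustration, not the paper's implementation: `probe_semantic_probs` stands in for SemanticSpec's semantic probability estimation (here a randomly initialized linear probe over a hidden state, where a real probe would be trained), and the semantic class labels and acceptance threshold are invented for the example.

```python
import math
import random

random.seed(0)

HIDDEN_DIM = 8
SEMANTIC_CLASSES = ["affirm", "negate", "hedge"]  # illustrative meaning labels

# One probe weight vector per semantic class (trained in practice; random here).
probe_weights = {
    c: [random.gauss(0, 1) for _ in range(HIDDEN_DIM)] for c in SEMANTIC_CLASSES
}

def probe_semantic_probs(hidden_state):
    """Linear probe: softmax over per-class logits w_c . h."""
    logits = {
        c: sum(w * h for w, h in zip(ws, hidden_state))
        for c, ws in probe_weights.items()
    }
    m = max(logits.values())
    exps = {c: math.exp(v - m) for c, v in logits.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

def verify_draft(hidden_state, draft_meaning, threshold=0.5):
    """Accept an entire drafted sequence if the target model's hidden state
    assigns its meaning at least `threshold` probability. Unlike token-level
    verification, two drafts with different tokens but the same meaning
    ("Yes." vs. "Certainly.") are judged identically."""
    probs = probe_semantic_probs(hidden_state)
    return probs[draft_meaning] >= threshold, probs

# Usage: a hidden state aligned with the "affirm" probe direction accepts an
# "affirm" draft regardless of its surface tokens.
h = [w * 3.0 for w in probe_weights["affirm"]]
accepted, probs = verify_draft(h, "affirm")
print(accepted)
```

The point of the sketch is the acceptance criterion: token-level speculative decoding would reject "Certainly." when the target model prefers "Yes.", while a semantic-level check accepts any draft whose estimated meaning matches, which is the source of the reported efficiency gains.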
Problem

Research questions and friction points this paper is trying to address.

speculative decoding
semantic equivalence
inference latency
large reasoning models
autoregressive decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

semantic-aware decoding
speculative decoding
hidden state probing
semantic equivalence
efficient LLM inference