SPECS: Faster Test-Time Scaling through Speculative Drafts

📅 2025-06-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing test-time compute scaling methods focus on the FLOPS–accuracy trade-off while neglecting end-to-end latency constraints, thereby degrading user experience. This paper introduces the first latency-aware speculative decoding framework. Our method replaces hard rejection with a reward-guided soft verification mechanism; employs a reward-driven dynamic latency decision policy, theoretically proven to converge to a KL-regularized reinforcement learning objective; and proposes a tri-model collaborative architecture integrating a draft model, a target model, and a dedicated reward model—enabling multi-signal verification and dynamic candidate pruning. Evaluated on benchmarks including MATH500, our approach matches or surpasses beam search in accuracy while reducing end-to-end latency by up to 19.1%.

📝 Abstract
Scaling test-time compute has driven the recent advances in the reasoning capabilities of large language models (LLMs), typically by allocating additional computation for more thorough exploration. However, increased compute often comes at the expense of higher user-facing latency, directly impacting user experience. Current test-time scaling methods primarily optimize for accuracy based on total compute resources (FLOPS), often overlooking latency constraints. To address this gap, we propose SPECS, a latency-aware test-time scaling method inspired by speculative decoding. SPECS uses a smaller, faster model to generate candidate sequences efficiently, and evaluates these candidates using signals from both a larger target model and a dedicated reward model. We introduce new integration strategies, including reward-guided soft verification and a reward-based deferral mechanism. Empirical results on the MATH500, AMC23, and OlympiadBench datasets show that SPECS matches or surpasses beam search accuracy while reducing latency by up to ~19.1%. Our theoretical analysis shows that our algorithm converges to the solution of a KL-regularized reinforcement learning objective with increasing beam width.
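The reward-guided soft verification described above can be sketched as follows. This is an illustrative reconstruction, not the paper's actual algorithm: the function names, the linear mixing weight `alpha`, and the temperature `beta` are all assumptions. The core idea is that instead of hard-rejecting draft candidates, the target model's score and the reward model's score are combined and candidates are kept in proportion to a softmax over the combined signal.

```python
import math
import random

def soft_verify(candidates, target_scores, reward_scores, beta=1.0, alpha=0.5):
    """Reward-guided soft verification (hypothetical sketch).

    Rather than hard accept/reject, mix the target model's log-probability
    with a reward-model score, then sample a candidate from the softmax of
    the combined signal. `alpha` balances the two signals; `beta` is the
    softmax temperature. All names here are illustrative assumptions.
    """
    combined = [alpha * t + (1.0 - alpha) * r
                for t, r in zip(target_scores, reward_scores)]
    # Numerically stable softmax over the combined scores.
    m = max(combined)
    weights = [math.exp(beta * (c - m)) for c in combined]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Sample one candidate in proportion to its soft-verification weight.
    idx = random.choices(range(len(candidates)), weights=probs, k=1)[0]
    return candidates[idx], probs
```

Because acceptance is probabilistic rather than binary, a draft candidate with a slightly lower target-model score can still survive if the reward model favors it, which is what lets the draft model's cheap proposals replace full target-model generation most of the time.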
Problem

Research questions and friction points this paper is trying to address.

Reducing latency in test-time scaling for LLMs
Balancing accuracy and compute efficiency
Optimizing draft generation with speculative decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses smaller model for efficient candidate generation
Integrates reward-guided soft verification strategy
Reduces latency while maintaining accuracy
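The reward-based deferral mechanism mentioned in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration (the threshold policy and function names are assumptions, not the paper's API): the cheap draft model's answer is kept when its reward clears a threshold, and generation is deferred to the slower target model otherwise, which is where the latency savings come from.

```python
def reward_deferral(draft_reward, threshold, draft_answer, target_generate):
    """Reward-based deferral (illustrative sketch).

    Keep the draft model's candidate when its reward-model score is high
    enough; otherwise fall back to the larger target model. `target_generate`
    is a zero-argument callable standing in for expensive target decoding.
    """
    if draft_reward >= threshold:
        return draft_answer  # fast path: accept the draft
    return target_generate()  # slow path: defer to the target model
```

In practice the threshold trades latency against accuracy: a lower threshold accepts more draft outputs (faster, riskier), while a higher one defers more often to the target model.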