Efficient Reasoning for LLMs through Speculative Chain-of-Thought

📅 2025-04-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high latency induced by long chain-of-thought (CoT) reasoning in large language models (e.g., Deepseek-R1), this paper proposes Speculative Chain-of-Thought (SCoT), the first framework to apply speculative decoding at the CoT level. SCoT employs a lightweight draft model to generate multiple CoT candidates in parallel, which are then validated and corrected by the target large model. Key innovations include a thought-behavior alignment distillation mechanism and a quality-aware dynamic draft selection strategy—both designed to preserve complex reasoning accuracy while maximizing speedup. Evaluated on five mathematical reasoning benchmarks (including GSM8K and MATH), SCoT achieves 48–66% latency reduction over Deepseek-R1-Distill-Qwen-32B, while maintaining ≥98% of the target model’s accuracy.
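The draft-then-verify loop described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the `draft`, `score`, and `solve` callables are hypothetical stand-ins for the lightweight draft model, the target model's quality-aware draft scoring, and the target model's full reasoning fallback.

```python
from typing import Callable, List

def speculative_cot(
    question: str,
    draft: Callable[[str], List[str]],   # small model: several CoT drafts in parallel
    score: Callable[[str, str], float],  # target model: quality score for a draft
    solve: Callable[[str], str],         # target model: full reasoning fallback
    threshold: float = 0.5,
) -> str:
    """Accept the best draft CoT if it passes the quality check;
    otherwise fall back to the target model's own (slower) reasoning."""
    drafts = draft(question)
    best = max(drafts, key=lambda d: score(question, d))
    if score(question, best) >= threshold:
        return best           # fast path: draft accepted
    return solve(question)    # error case: corrected by the target model

# Toy stand-ins so the sketch runs end to end:
draft_fn = lambda q: ["2+2=5", "2+2=4"]
score_fn = lambda q, d: 1.0 if d.endswith("4") else 0.0
solve_fn = lambda q: "2+2=4 (target model)"
print(speculative_cot("What is 2+2?", draft_fn, score_fn, solve_fn))  # 2+2=4
```

The speedup comes from the fast path: most questions are answered by the cheap draft model, and the expensive target model only scores drafts (and, rarely, re-solves from scratch).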

📝 Abstract
Large reasoning language models such as OpenAI-o1 and Deepseek-R1 have recently attracted widespread attention due to their impressive task-solving abilities. However, the enormous model size and the generation of lengthy thought chains introduce significant reasoning costs and response latency. Existing methods for efficient reasoning mainly focus on reducing the number of model parameters or shortening the chain-of-thought length. In this paper, we introduce Speculative Chain-of-Thought (SCoT), which reduces reasoning latency from another perspective: accelerating the average reasoning speed through large and small model collaboration. SCoT conducts thought-level drafting using a lightweight draft model. Then it selects the best CoT draft and corrects the error cases with the target model. The proposed thinking behavior alignment improves the efficiency of drafting, and the draft selection strategy maintains the prediction accuracy for complex problems. Experimental results on the GSM8K, MATH, GaoKao, CollegeMath and Olympiad datasets show that SCoT reduces reasoning latency by 48%–66% for Deepseek-R1-Distill-Qwen-32B while achieving near-target-model-level performance. Our code is available at https://github.com/Jikai0Wang/Speculative_CoT.
Problem

Research questions and friction points this paper is trying to address.

Reduces reasoning latency in large language models
Improves efficiency through draft model collaboration
Maintains accuracy while accelerating reasoning speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large and small model collaboration
Implements thought-level drafting with lightweight model
Corrects errors via target model for accuracy