Accelerating Large Language Model Reasoning via Speculative Search

📅 2025-05-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from substantial latency in tree-search-based reasoning due to frequent generation of intermediate thoughts, hindering practical deployment. This paper proposes Speculative Search, the first framework enabling dual-granularity speculative inference—jointly operating at both the thought level and token level. A small model generates candidate thought paths, while the large model dynamically evaluates their quality and performs rejection sampling, guaranteeing that retained thoughts meet or exceed the quality of those self-generated by the large model. We introduce a novel quality-preserving rejection mechanism that rigorously aligns thought-level fidelity with the large model’s standards. Evaluated on Qwen and Llama series models, Speculative Search achieves up to 2.12× speedup in reasoning latency, maintains reasoning quality comparable to the baseline large model, and significantly outperforms existing state-of-the-art methods.

📝 Abstract
Tree-search-based reasoning methods have significantly enhanced the reasoning capability of large language models (LLMs) by facilitating the exploration of multiple intermediate reasoning steps, i.e., thoughts. However, these methods suffer from substantial inference latency, as they have to generate numerous reasoning thoughts, severely limiting LLM applicability. To address this challenge, we propose a novel Speculative Search (SpecSearch) framework that significantly accelerates LLM reasoning by optimizing thought generation. Specifically, SpecSearch utilizes a small model to strategically collaborate with a large model at both thought and token levels, efficiently generating high-quality reasoning thoughts. The major pillar of SpecSearch is a novel quality-preserving rejection mechanism, which effectively filters out thoughts whose quality falls below that of the large model's outputs. Moreover, we show that SpecSearch preserves comparable reasoning quality to the large model. Experiments on both the Qwen and Llama models demonstrate that SpecSearch significantly outperforms state-of-the-art approaches, achieving up to 2.12× speedup with comparable reasoning quality.
Problem

Research questions and friction points this paper is trying to address.

Reducing inference latency in tree-search-based LLM reasoning
Optimizing thought generation to accelerate LLM reasoning
Maintaining reasoning quality while speeding up LLM outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Small model collaborates with large model
Quality-preserving rejection mechanism filters thoughts
Achieves 2.12x speedup with comparable quality
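The collaboration described above can be sketched as a single thought-expansion step: a small model drafts a candidate thought cheaply, the draft is scored, and it is kept only if its quality clears a threshold reflecting the large model's typical output quality; otherwise the large model regenerates the thought itself. This is a minimal illustrative sketch, not the paper's implementation — the function names, the callable interfaces, and the fixed-threshold acceptance rule are all assumptions for illustration.

```python
def spec_search_step(small_model, large_model, evaluator, state, threshold):
    """One thought-expansion step of a SpecSearch-style loop (hypothetical API).

    small_model / large_model: callables mapping a reasoning state to a
    candidate thought (a short chunk of intermediate reasoning).
    evaluator: callable scoring a thought's quality (e.g., a process
    reward model); higher is better.
    threshold: an estimate of the large model's typical thought quality;
    drafts scoring below it are rejected.
    """
    draft = small_model(state)           # cheap draft thought from the small model
    score = evaluator(state, draft)      # quality check against the large model's standard
    if score >= threshold:
        return draft, score              # accept: quality matches the large model's
    # reject: fall back to the large model's own (slower) thought generation
    thought = large_model(state)
    return thought, evaluator(state, thought)
```

The speedup comes from the accept branch: whenever the small model's draft passes the quality bar, the expensive large-model generation is skipped entirely, while the rejection rule is what preserves reasoning quality.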
Zhihai Wang
Qwen Team, PhD, USTC
Sample-Efficient Reinforcement Learning · RL4LLM · Agentic RL
Jie Wang
MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Jilai Pan
MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Xilin Xia
MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China
Huiling Zhen
Noah’s Ark Lab, Huawei Technologies
Mingxuan Yuan
Noah’s Ark Lab, Huawei Technologies
Jianye Hao
Huawei Noah's Ark Lab/Tianjin University
Multiagent Systems · Embodied AI
Feng Wu
National University of Singapore
Machine Learning · Medical Time Series