🤖 AI Summary
Motivation: Multi-sample inference techniques such as self-consistency and Best-of-N sampling are computationally expensive, and existing speculative decoding approaches that could accelerate them rely on auxiliary draft models or external knowledge bases for draft generation, introducing extra overhead and architectural complexity.
Method: The paper proposes a consensus-driven speculative decoding framework that eliminates auxiliary draft models. It generates multiple reasoning paths in parallel and exploits their intrinsic agreement, in both token-level probability distributions and structural patterns, to assemble high-acceptance draft tokens through an adaptive probability fusion mechanism.
Contribution/Results: To the authors' knowledge, this is the first work to extend speculative decoding to multi-sample inference, replacing draft models with model-free draft generation. On mathematical reasoning benchmarks, the method markedly improves draft token acceptance rates, reduces drafting latency, and delivers substantial end-to-end inference speedup without compromising accuracy.
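The core idea of fusing agreement across parallel paths into draft tokens can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the function names, the uniform weighting, and the confidence threshold `tau` are all assumptions.

```python
# Hypothetical sketch of consensus-driven draft construction for
# multi-sample speculative decoding. Names, uniform weights, and the
# threshold tau are illustrative assumptions, not the paper's algorithm.

def fuse_distributions(path_probs, weights=None):
    """Weighted average of next-token distributions from parallel paths.

    path_probs: list of per-path probability vectors over the same vocab.
    weights: optional per-path weights (e.g. path confidence); uniform if None.
    """
    n = len(path_probs)
    if weights is None:
        weights = [1.0 / n] * n
    vocab = len(path_probs[0])
    fused = [0.0] * vocab
    for w, probs in zip(weights, path_probs):
        for t in range(vocab):
            fused[t] += w * probs[t]
    return fused

def propose_draft_token(path_probs, tau=0.5):
    """Propose a draft token only when the fused distribution is confident.

    Returns (token_id, fused_prob), or None when cross-path consensus is
    too weak, at which point drafting stops and normal decoding resumes.
    """
    fused = fuse_distributions(path_probs)
    best = max(range(len(fused)), key=fused.__getitem__)
    return (best, fused[best]) if fused[best] >= tau else None

# Toy example: three paths over a 4-token vocabulary agree on token 0.
paths = [
    [0.7, 0.1, 0.1, 0.1],   # path A strongly favors token 0
    [0.6, 0.2, 0.1, 0.1],   # path B agrees
    [0.5, 0.3, 0.1, 0.1],   # path C agrees more weakly
]
print(propose_draft_token(paths))  # token 0, fused probability ~0.6
```

In a real system the paths would be extended token by token, with drafting terminated as soon as the fused confidence drops below the threshold, which is one way to realize the "adaptive" part of the fusion.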
📝 Abstract
We propose a novel speculative decoding method tailored for multi-sample reasoning scenarios, such as self-consistency and Best-of-N sampling. Our method exploits the intrinsic consensus of parallel generation paths to synthesize high-quality draft tokens without requiring auxiliary models or external databases. By dynamically analyzing structural patterns across parallel reasoning paths through a probabilistic aggregation mechanism, it identifies consensus token sequences that align with the decoding distribution. Evaluations on mathematical reasoning benchmarks demonstrate a substantial improvement in draft acceptance rates over baselines, while reducing draft construction latency. This work establishes a paradigm shift for efficient multi-sample inference, enabling seamless integration of speculative decoding with sampling-based reasoning techniques.
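Alignment with the decoding distribution is what any speculative decoding scheme must preserve: however drafts are produced, each drafted token is verified against the target model. A minimal sketch of the standard speculative-sampling acceptance test (the classic rule from the speculative decoding literature, not anything specific to this paper; the function name is illustrative):

```python
import random

def accept_draft_token(token, target_probs, draft_probs, rng=random):
    """Standard speculative-sampling acceptance test: accept a drafted
    token with probability min(1, p_target / p_draft). On rejection the
    caller resamples from the adjusted residual distribution, which keeps
    the overall output distribution identical to ordinary decoding."""
    p_target = target_probs[token]
    p_draft = draft_probs[token]
    return rng.random() < min(1.0, p_target / p_draft)
```

The better the draft distribution matches the target (here, because fused consensus drafts track the model's own probabilities), the closer the acceptance ratio is to 1, which is where the reported acceptance-rate gains would come from.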