Speculative Ensemble: Fast Large Language Model Ensemble via Speculation

📅 2025-02-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the slow inference speed of large language model (LLM) ensembles, this paper proposes Speculative Ensemble—a novel framework that extends speculative decoding to multi-model ensemble inference. It dynamically rotates models between “proposer” and “verifier” roles and models the verification process via the joint distribution of the proposer and the target ensemble. Theoretically, its inference speed is provably no worse than standard ensemble inference. The framework generalizes to *n*-model ensembles and introduces a collaborative verification mechanism alongside ensemble distribution modeling. Empirical evaluation across multiple tasks demonstrates 1.11×–2.23× inference speedup over standard ensembles, while maintaining statistically indistinguishable generation quality—measured by BLEU, ROUGE, and accuracy—thus achieving a favorable efficiency–performance trade-off.

📝 Abstract
Ensemble methods enhance Large Language Models (LLMs) by combining multiple models but suffer from high computational costs. In this paper, we introduce Speculative Ensemble (SE), a novel framework that accelerates LLM ensembles without sacrificing performance, inspired by Speculative Decoding, where a small proposal model generates tokens sequentially and a larger target model verifies them in parallel. Our approach builds on two key insights: (1) the verification distribution can be the ensemble distribution of the proposal and target models, and (2) alternating each model between the proposer and verifier roles further enhances efficiency. We generalize this method to ensembles with n models and theoretically prove that SE is never slower than a standard ensemble, typically achieving faster speed. Extensive experiments demonstrate speed improvements of 1.11x-2.23x over standard ensemble techniques without compromising generation quality. Our code is available at https://github.com/Kamichanw/Speculative-Ensemble/
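The abstract's core mechanism (the proposer drafts a token, which is then accepted or rejected against the *ensemble* distribution via the standard speculative-sampling test) can be sketched in a few lines. The function names, the equal-weight two-model averaging, and the toy four-token distributions below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ensemble_dist(p_proposer, p_verifier, w=0.5):
    # Toy ensemble: weighted average of the two models' next-token
    # distributions, renormalized. The paper's ensemble rule may differ.
    q = w * p_proposer + (1 - w) * p_verifier
    return q / q.sum()

def speculative_accept(token, p_proposer, q_target, rng):
    # Standard speculative-sampling acceptance test, with the ensemble
    # distribution q_target playing the role of the target model.
    if rng.random() < min(1.0, q_target[token] / p_proposer[token]):
        return token
    # On rejection, resample from the residual distribution max(q - p, 0),
    # which keeps the overall output distributed exactly as q_target.
    residual = np.maximum(q_target - p_proposer, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(residual), p=residual))

# Tiny 4-token vocabulary with made-up distributions.
rng = np.random.default_rng(0)
p_prop = np.array([0.6, 0.2, 0.1, 0.1])
p_ver = np.array([0.3, 0.3, 0.2, 0.2])
q = ensemble_dist(p_prop, p_ver)

draft = int(rng.choice(4, p=p_prop))              # proposer drafts a token
final = speculative_accept(draft, p_prop, q, rng)  # verify vs. ensemble
print(final)
```

Alternating which model drafts and which verifies (the paper's second insight) amounts to swapping `p_prop` and `p_ver` between steps while keeping the same target distribution `q`.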
Problem

Research questions and friction points this paper is trying to address.

Slow inference speed of LLM ensembles
High computational cost of multi-model decoding
Preserving generation quality while accelerating
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative decoding extended to ensemble inference
Alternating proposer/verifier model roles
Verification against the joint ensemble distribution
Jiale Fu
Southeast University
speculative decoding, LLM reasoning
Yuchu Jiang
Southeast University
Large Language Models, Computer Vision
Junkai Chen
Southeast University, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Jiaming Fan
Southeast University, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Xin Geng
School of Computer Science and Engineering, Southeast University
Artificial Intelligence, Pattern Recognition, Machine Learning
Xu Yang
Southeast University, Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China