🤖 AI Summary
To address the slow inference speed of large language model (LLM) ensembles, this paper proposes Speculative Ensemble—a novel framework that extends speculative decoding to multi-model ensemble inference. It dynamically rotates models between “proposer” and “verifier” roles and models the verification process via the joint distribution of the proposer and the target ensemble. Theoretically, its inference speed is provably no worse than standard ensemble inference. The framework generalizes to *n*-model ensembles and introduces a collaborative verification mechanism alongside ensemble distribution modeling. Empirical evaluation across multiple tasks demonstrates 1.11×–2.23× inference speedup over standard ensembles, while maintaining statistically indistinguishable generation quality—measured by BLEU, ROUGE, and accuracy—thus achieving a favorable efficiency–performance trade-off.
📝 Abstract
Ensemble methods enhance Large Language Models (LLMs) by combining multiple models but suffer from high computational costs. In this paper, we introduce Speculative Ensemble (SE), a novel framework that accelerates LLM ensembles without sacrificing performance, inspired by Speculative Decoding, where a small proposal model generates tokens sequentially and a larger target model verifies them in parallel. Our approach builds on two key insights: (1) the verification distribution can be the ensemble distribution of both the proposal and target models, and (2) alternating each model between the proposer and verifier roles can further enhance efficiency. We generalize this method to ensembles with *n* models and theoretically prove that SE is never slower than a standard ensemble, and is typically faster. Extensive experiments demonstrate speed improvements of 1.11x-2.23x over standard ensemble techniques without compromising generation quality. Our code is available at https://github.com/Kamichanw/Speculative-Ensemble/
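To make insight (1) concrete, the following is a minimal sketch of a single verification step in the spirit of speculative sampling, where the verification distribution is taken to be the ensemble (here, a simple average — one possible ensembling choice, assumed for illustration) of the proposal and target distributions. The toy dict-based interface, function name, and `Fixed` rng stub are hypothetical and not from the paper; the accept/resample rule follows standard speculative sampling.

```python
def speculative_ensemble_step(p_draft, p_target, draft_token, rng):
    """One hypothetical verification step: accept or resample a draft token,
    verifying against the ensemble of the draft and target distributions.

    p_draft, p_target: toy dicts mapping token -> probability.
    rng: any object exposing .random() -> float in [0, 1).
    """
    # Insight (1): the verification distribution q is the ensemble
    # distribution of proposal and target (assumed: a simple average).
    vocab = sorted(set(p_draft) | set(p_target))
    q = {t: 0.5 * (p_draft.get(t, 0.0) + p_target.get(t, 0.0)) for t in vocab}

    p_x = p_draft.get(draft_token, 0.0)
    q_x = q.get(draft_token, 0.0)

    # Standard speculative-sampling acceptance test: keep the draft token
    # with probability min(1, q(x) / p(x)).
    if p_x > 0 and rng.random() < min(1.0, q_x / p_x):
        return draft_token, True

    # On rejection, resample from the normalized residual max(0, q - p),
    # which preserves the verification distribution q overall.
    residual = {t: max(0.0, q[t] - p_draft.get(t, 0.0)) for t in vocab}
    z = sum(residual.values())
    if z == 0.0:  # draft and ensemble agree exactly; fall back to q
        residual, z = q, sum(q.values())
    r = rng.random() * z
    acc = 0.0
    for t, w in residual.items():
        acc += w
        if r <= acc:
            return t, False
    return t, False


class Fixed:
    """Deterministic rng stub for demonstration."""
    def __init__(self, v):
        self.v = v

    def random(self):
        return self.v
```

When draft and target agree, the ratio q(x)/p(x) is 1 and every draft token is accepted, which is why SE can run as fast as the faster model; disagreement triggers the residual resampling that keeps outputs faithful to the ensemble distribution.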