Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inherent knowledge limitations of individual large language models (LLMs) and the high computational overhead of conventional model ensembling, this paper proposes Collaborative Speculative Decoding (CoSD), a training-free, inference-time framework that dynamically integrates complementary knowledge from multiple LLMs without fine-tuning or retraining. CoSD adopts a draft-verify architecture: a draft LLM generates initial output sequences, while a lightweight, interpretable rule (or learned decision tree) decides, token by token, whether to invoke an assistant LLM to refine them. Its core innovation is a test-time, training-free, and human-interpretable knowledge fusion mechanism that balances efficiency, cross-domain generalization, and scheduling transparency. Extensive evaluation across diverse benchmarks demonstrates that CoSD achieves up to a 10% average accuracy gain over state-of-the-art speculative decoding and ensemble methods, while maintaining low latency and resource overhead.
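The draft-verify loop described above can be sketched in a few lines. Everything below is illustrative: the two toy "models" are hypothetical stand-ins for real LLM next-token distributions, and the hand-written probability threshold stands in for the paper's learned rule/decision tree.

```python
# Sketch of CoSD-style collaborative speculative decoding (illustrative only).

def fuse_decode(draft_model, assistant_model, prompt,
                max_new_tokens, draft_block=4, threshold=0.3):
    """Draft a block of tokens, then let the assistant verify each one."""
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. Draft phase: the draft model greedily proposes a short block.
        ctx = list(tokens)
        block = []
        for _ in range(draft_block):
            dist = draft_model(tuple(ctx))
            tok = max(dist, key=dist.get)
            block.append(tok)
            ctx.append(tok)
        # 2. Verify phase: a simple rule decides, per token, whether to keep
        # the draft token or substitute the assistant's prediction.
        for tok in block:
            a_dist = assistant_model(tuple(tokens))
            a_top = max(a_dist, key=a_dist.get)
            if (a_dist.get(tok, 0.0) < threshold
                    and a_top != tok and a_dist[a_top] > 0.5):
                tokens.append(a_top)   # assistant overrides the draft token
                break                  # re-draft from the corrected prefix
            tokens.append(tok)
            if len(tokens) - len(prompt) >= max_new_tokens:
                break
    return tokens

# Toy distributions over a tiny vocabulary: the draft model wrongly prefers
# "dog" after "the", while the assistant is confident the answer is "cat".
def toy_draft(ctx):
    if ctx and ctx[-1] == "the":
        return {"dog": 0.6, "cat": 0.2, ".": 0.2}
    return {".": 1.0}

def toy_assistant(ctx):
    if ctx and ctx[-1] == "the":
        return {"cat": 0.8, "dog": 0.1, ".": 0.1}
    return {".": 1.0}

print(fuse_decode(toy_draft, toy_assistant, ["the"], max_new_tokens=2))
# The assistant's knowledge wins: ['the', 'cat', '.']
```

When the assistant agrees with the draft (probability above the threshold), drafted tokens pass through untouched, which is where the efficiency of speculative decoding comes from; only disagreements trigger a correction and a re-draft.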

📝 Abstract
Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training. Thus, enabling LLMs to solve problems collaboratively by integrating their complementary knowledge promises to improve their performance across domains. To realize this potential, we introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time without requiring additional model training. CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts. CoSD not only enhances knowledge fusion but also improves inference efficiency, is transferable across domains and models, and offers greater explainability. Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods, providing a scalable and effective solution for LLM-based applications.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLM performance across diverse domains
Enable collaborative knowledge fusion without retraining
Improve inference efficiency and explainability in decoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative Speculative Decoding
Knowledge fusion without training
Enhances accuracy and efficiency
Ziyao Wang
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
Muneeza Azmat
IBM Research, Yorktown Heights, NY, USA
Ang Li
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
Raya Horesh
IBM Research, Yorktown Heights, NY, USA
Mikhail Yurochkin
Staff AI Scientist, IFM MBZUAI, ex MIT-IBM Watson AI Lab
Machine Learning · Foundation Models · Evaluation · Model Fusion