M-Ped: Multi-Prompt Ensemble Decoding for Large Language Models

📅 2024-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address unstable generation quality of large language models (LLMs) across diverse instruction-following tasks—such as machine translation, code generation, and text simplification—this paper proposes a zero-shot multi-prompt ensemble decoding method. The approach introduces two key innovations: (1) an Inner-Batch Ensemble mechanism that concurrently samples from *n* prompt variants for the same input and performs probability-level averaging over token prediction distributions within a single batch; and (2) a left-padding length-alignment strategy to ensure sequence alignment and maximize computational efficiency. Crucially, the method requires no fine-tuning and introduces no additional parameters. Experiments demonstrate substantial improvements in decoding robustness and cross-task generalization, achieving statistically significant gains over strong baselines across multiple metrics—including BLEU for translation, pass@k for code generation, and LENS for text simplification—thereby validating its effectiveness and practical utility.
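The core of the Inner-Batch Ensemble is probability-level averaging: at each decoding step, the per-prompt next-token distributions are averaged and the next token is drawn from the ensemble distribution. A minimal sketch of one such step, using toy distributions and greedy selection (the function name and toy vocabulary are illustrative, not from the paper's code):

```python
import numpy as np

def inner_batch_ensemble_step(prob_dists: np.ndarray) -> int:
    """Average n per-prompt token distributions (shape: n x vocab_size)
    and pick the next token from the ensemble distribution (greedy)."""
    ensemble = prob_dists.mean(axis=0)  # probability-level averaging
    return int(ensemble.argmax())

# Toy example: n = 3 prompt variants over a 4-token vocabulary.
dists = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.4, 0.3, 0.2, 0.1],
])
next_token = inner_batch_ensemble_step(dists)  # → 1 (highest average mass)
```

In an actual decoding loop this step would be applied to the softmax outputs of the same model run over all $n$ prompt variants in one batch, with the chosen token appended to every sequence before the next step.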

📝 Abstract
With the widespread application of Large Language Models (LLMs) in the field of Natural Language Processing (NLP), enhancing their performance has become a research hotspot. This paper presents a novel multi-prompt ensemble decoding approach designed to bolster the generation quality of LLMs by leveraging the aggregation of outcomes from multiple prompts. Given a unique input $X$, we submit $n$ variations of prompts with $X$ to LLMs in batch mode to decode and derive probability distributions. For each token prediction, we calculate the ensemble probability by averaging the $n$ probability distributions within the batch, utilizing this aggregated probability to generate the token. This technique is dubbed Inner-Batch Ensemble. To facilitate efficient batch inference, we implement a Left-Padding strategy to maintain uniform input lengths across the $n$ prompts. Through extensive experimentation on diverse NLP tasks, including machine translation, code generation, and text simplification, we demonstrate the efficacy of our method in enhancing LLM performance. The results show substantial improvements in BLEU scores, pass@$k$ rates, and LENS metrics over conventional methods.
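The Left-Padding strategy mentioned in the abstract can be illustrated with a short sketch: padding each prompt variant on the left brings all $n$ sequences to the same length, so their final (generation) positions line up in one batch. This is a minimal illustration over token-id lists; the pad id and helper name are assumptions, not the paper's implementation:

```python
def left_pad(prompt_ids, pad_id=0):
    """Left-pad a batch of token-id lists to the longest length so the
    last (generation) positions align across all prompt variants."""
    max_len = max(len(ids) for ids in prompt_ids)
    return [[pad_id] * (max_len - len(ids)) + ids for ids in prompt_ids]

batch = left_pad([[5, 6], [7, 8, 9], [4]])
# every row now ends at the same position:
# [[0, 5, 6], [7, 8, 9], [0, 0, 4]]
```

Left-padding (rather than the right-padding common for encoder models) matters for decoder-only LLMs because the next token is predicted from the last position of each row; with left padding that position is a real token for every variant.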
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Multitask Instruction
Natural Language Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Prompt Ensemble Decoding
Large Language Models
Natural Language Tasks
Jiaxin Guo
Huawei Translation Services Center, Beijing, China
Daimeng Wei
Huawei Translation Services Center, Beijing, China
Yuanchang Luo
2012 Lab, Huawei
Shimin Tao
2012 Lab, Huawei Co., Ltd.
Machine Translation · AIOps · Log Analysis
Hengchao Shang
Huawei Translation Services Center, Beijing, China
Zongyao Li
Huawei Translation Services Center, Beijing, China
Shaojun Li
Engineer, 2012 Lab, Huawei Co., Ltd.
Jinlong Yang
Huawei Translation Services Center, Beijing, China
Zhanglin Wu
2012 Lab, Huawei Co., Ltd.
Machine Translation · Natural Language Processing
Zhiqiang Rao
Huawei
NLP
Hao Yang
Huawei Translation Services Center, Beijing, China