ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) rely heavily on manually crafted prompts, while existing automatic prompt optimization (APO) methods suffer from weak generalizability and poor performance on complex tasks. To address these limitations, this paper proposes ELPO—an Ensemble Learning-based Prompt Optimization framework. ELPO integrates multi-strategy collaborative search—including evolutionary algorithms and trial-and-error mechanisms—with a shared generation mechanism, and introduces a voting-based ensemble strategy to improve the robustness and adaptability of prompt generation. Evaluated on multi-task benchmarks including ArSarcasm, ELPO consistently outperforms state-of-the-art APO methods, achieving a 7.6-point improvement in F1 score and demonstrating substantial gains in accuracy, stability, and cross-task generalization.

📝 Abstract
The remarkable performance of Large Language Models (LLMs) relies heavily on carefully crafted prompts. However, manual prompt engineering is a laborious process, creating a core bottleneck for the practical application of LLMs. This has led to the emergence of a new research area known as Automatic Prompt Optimization (APO), which has developed rapidly in recent years. Existing APO methods, such as those based on evolutionary algorithms or trial-and-error approaches, achieve efficient and accurate prompt optimization to some extent. However, these works rely on a single model or algorithm for the generation strategy and optimization process, which limits their performance on complex tasks. To address this, we propose a novel framework called Ensemble Learning based Prompt Optimization (ELPO) to achieve more accurate and robust results. Motivated by the idea of ensemble learning, ELPO employs a voting mechanism and introduces shared generation strategies along with different search methods for finding superior prompts. Moreover, ELPO presents more efficient algorithms for the prompt generation and search process. Experimental results demonstrate that ELPO outperforms state-of-the-art prompt optimization methods across different tasks, e.g., improving the F1 score by 7.6 points on the ArSarcasm dataset.
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts automatically for large language models
Overcoming limitations of single-method prompt optimization approaches
Enhancing accuracy and robustness in complex task performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ensemble learning framework for prompt optimization
Voting mechanism with shared generation strategies
Efficient algorithms for prompt generation and search
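The paper itself is not reproduced on this page, so the exact voting algorithm is unknown; as a rough illustration only, the idea of multiple search strategies voting on a shared pool of candidate prompts can be sketched as below. All names (`vote_best_prompt`, the toy scoring heuristics) are hypothetical, not from the paper.

```python
from collections import Counter

def vote_best_prompt(candidates, searchers):
    """Each searcher (standing in for one search strategy, e.g. an
    evolutionary or trial-and-error search) votes for the candidate
    prompt it scores highest; the majority winner is returned."""
    votes = [max(candidates, key=searcher) for searcher in searchers]
    return Counter(votes).most_common(1)[0][0]

# Toy example: three heuristics vote over two candidate prompts.
candidates = [
    "Classify the tweet.",
    "Is this tweet sarcastic? Answer yes/no.",
]
searchers = [
    lambda p: len(p),                     # prefers more detailed prompts
    lambda p: p.count("sarcastic"),       # prefers task keywords
    lambda p: 1 if "yes/no" in p else 0,  # prefers constrained outputs
]
print(vote_best_prompt(candidates, searchers))
```

In ELPO proper, each "searcher" would be a full prompt-search method scoring candidates by task metrics (e.g. F1 on a validation set) rather than a one-line heuristic; the sketch only shows how a vote can aggregate their disagreements.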
Qing Zhang
ByteDance, China
Bing Xu
ByteDance, China
Xudong Zhang
ByteDance, China
Yifan Shi
Graduate Student of Tsinghua University
Distributed Optimization · Federated Learning · Efficient LLMs
Yang Li
ByteDance, China
Chen Zhang
Department of Electrical and Electronic Engineering, The University of Hong Kong, HKSAR, China
Yik Chung Wu
Department of Electrical and Electronic Engineering, The University of Hong Kong, HKSAR, China
Ngai Wong
Department of Electrical and Electronic Engineering, The University of Hong Kong, HKSAR, China
Yijie Chen
Professor of Wenzhou Medical University, Postdoc researcher in UCSD, Ph.D. in SJTU
Nanomedicine · Detoxification · Vaccination
Hong Dai
ByteDance, China
Xiansen Chen
ByteDance, China
Mian Zhang
University of Texas at Dallas
LLM