PhaseEvo: Towards Unified In-Context Prompt Optimization for Large Language Models

📅 2024-02-17
🏛️ arXiv.org
📈 Citations: 7
Influential: 0
🤖 AI Summary
Manual prompt engineering for large language models (LLMs) incurs high labor costs and yields suboptimal performance because instructions and in-context examples are optimized separately. Method: This paper proposes a unified in-context prompt optimization framework built on a multi-stage evolutionary architecture. It introduces, for the first time, an LLM-driven semantic-aware mutation operator that enables efficient global search in the discrete natural language space, coupled with prompt embedding reparameterization and a phased search strategy to jointly optimize instructions and examples. Contribution/Results: Evaluated on 35 benchmark tasks, the framework substantially outperforms state-of-the-art methods, achieving significant average performance gains while keeping computational overhead controllable.

📝 Abstract
Crafting an ideal prompt for Large Language Models (LLMs) is a challenging task that demands significant resources and expert human input. Existing work treats the optimization of prompt instruction and in-context learning examples as distinct problems, leading to sub-optimal prompt performance. This research addresses this limitation by establishing a unified in-context prompt optimization framework, which aims to achieve joint optimization of the prompt instruction and examples. However, formulating such optimization in the discrete and high-dimensional natural language space introduces challenges in terms of convergence and computational efficiency. To overcome these issues, we present PhaseEvo, an efficient automatic prompt optimization framework that combines the generative capability of LLMs with the global search proficiency of evolution algorithms. Our framework features a multi-phase design incorporating innovative LLM-based mutation operators to enhance search efficiency and accelerate convergence. We conduct an extensive evaluation of our approach across 35 benchmark tasks. The results demonstrate that PhaseEvo significantly outperforms the state-of-the-art baseline methods by a large margin whilst maintaining good efficiency.
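The abstract's central idea of using the LLM itself as a semantic-aware mutation operator can be sketched as follows. This is a minimal illustration, not PhaseEvo's actual operator: `call_llm` is a hypothetical stand-in for a real LLM API, stubbed here so the snippet runs offline.

```python
# Sketch of an LLM-driven semantic mutation over prompt text (assumption:
# a real system would call an LLM; `call_llm` below is a stub for illustration).

def call_llm(meta_prompt: str) -> str:
    # Stub: a real implementation would query an LLM with `meta_prompt`.
    # Here we return a trivially "mutated" prompt so the example is runnable.
    prompt = meta_prompt.split("PROMPT:", 1)[1].strip()
    return prompt + " Think step by step."

def semantic_mutate(prompt: str) -> str:
    """Ask the LLM to rewrite a prompt while preserving its intent."""
    meta_prompt = (
        "Rewrite the following instruction so it keeps the same meaning "
        "but phrases the task differently.\nPROMPT: " + prompt
    )
    return call_llm(meta_prompt)

mutant = semantic_mutate("Classify the sentiment of the review.")
```

Because the mutation is expressed in natural language rather than as token-level edits, the search moves through semantically meaningful neighbors of the current prompt, which is what makes global search in discrete language space tractable.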
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts for LLMs is resource-intensive and requires expertise
Existing methods optimize instructions and examples separately, yielding incoherent prompts
Discrete, high-dimensional language space challenges convergence and efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cohesive optimization of prompts and examples
Metaheuristic principles for efficient convergence
Quad-phased design balancing exploration and exploitation
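The exploration/exploitation balance in the phased design can be caricatured with a toy evolutionary loop. The fitness function and mutation operators below are invented stand-ins (PhaseEvo's real operators are LLM-based and its four phases differ in detail); the sketch only shows the shape of a search whose early phases explore broadly and whose later phases refine locally.

```python
import random

def score(prompt: str) -> float:
    # Toy fitness stand-in: reward prompts that state the task and ask for
    # an example. A real system would score prompts on held-out task data.
    return ("classify" in prompt.lower()) + ("example" in prompt.lower())

def explore(prompt: str, rng: random.Random) -> str:
    # Exploratory mutation: append a randomly chosen clause (global move).
    clause = rng.choice([" Provide an example.", " Be concise.", " Use JSON."])
    return prompt + clause

def exploit(prompt: str) -> str:
    # Exploitative edit: a small deterministic refinement (local move).
    return prompt.replace("  ", " ").strip()

def phased_search(seed_prompt: str, generations: int = 4,
                  pop_size: int = 6, seed: int = 0) -> str:
    rng = random.Random(seed)
    population = [seed_prompt]
    for gen in range(generations):
        # Early phases explore; later phases exploit.
        exploring = gen < generations // 2
        children = [explore(p, rng) if exploring else exploit(p)
                    for p in population]
        # Keep the fittest candidates for the next phase.
        population = sorted(set(population + children),
                            key=score, reverse=True)[:pop_size]
    return population[0]

best = phased_search("Classify the sentiment of the review.")
```

The key design point mirrored here is that the operator schedule, not a single operator, controls convergence: broad semantic rewrites first, cheap local refinements once promising candidates emerge.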