Efficient and Accurate Prompt Optimization: the Benefit of Memory in Exemplar-Guided Reflection

📅 2024-11-12
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing automated prompt optimization methods suffer from two key limitations: (1) insufficient feedback utilization, relying solely on current-step feedback while discarding potentially valuable historical feedback, and (2) suboptimal exemplar retrieval, based only on generic semantic similarity and neglecting task-specific performance and compatibility with the optimized prompt. To address these, the authors propose Exemplar-Guided Reflection with Memory (ERM), which jointly manages historical feedback and generated exemplars through two dedicated memories. The approach combines exemplar-guided reflection for feedback generation, exemplar retrieval that scores candidates on both semantics and task performance, and dynamic short-/long-term feedback memory updates. On the LIAR dataset, ERM improves the F1 score by 10.1 points over state-of-the-art baselines while requiring half the optimization steps of ProTeGi.
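
To make the optimization loop described above concrete, here is a minimal sketch of a feedback-driven prompt-optimization loop in the spirit of ERM. The `llm()` callable, helper names, and control flow are placeholder assumptions for illustration, not the authors' code:

```python
# Hypothetical outline of an ERM-style prompt-optimization loop.
# llm() is any text-in/text-out model; all names are illustrative.
from typing import Callable, List, Tuple

def optimize_prompt(
    prompt: str,
    train_set: List[Tuple[str, str]],   # (input, gold answer) pairs
    llm: Callable[[str], str],
    steps: int = 10,
) -> str:
    feedback_memory: List[str] = []     # historical feedback pool
    exemplar_memory: List[str] = []     # solved cases kept as exemplars
    for _ in range(steps):
        # 1. Collect erroneous cases under the current prompt.
        errors = [(x, y) for x, y in train_set if llm(f"{prompt}\n{x}") != y]
        if not errors:
            break
        # 2. Reflection: ask the model for feedback on the failures,
        #    guided by previously stored exemplars.
        feedback = llm(
            f"Prompt: {prompt}\nFailures: {errors[:3]}\n"
            f"Reference exemplars: {exemplar_memory[:3]}\n"
            "Explain the prompt's weaknesses and suggest a fix."
        )
        feedback_memory.append(feedback)
        # 3. Rewrite the prompt using current plus historical feedback.
        prompt = llm(
            f"Rewrite this prompt: {prompt}\n"
            f"Using feedback: {feedback_memory[-3:]}"
        )
        # 4. Store the failing cases as exemplars for later retrieval.
        exemplar_memory.extend(f"{x} -> {y}" for x, y in errors[:3])
    return prompt
```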

📝 Abstract
Automatic prompt engineering aims to enhance the generation quality of large language models (LLMs). Recent works utilize feedback generated from erroneous cases to guide prompt optimization. During inference, they may further retrieve several semantically related exemplars and concatenate them to the optimized prompts to improve performance. However, those works only utilize the feedback at the current step, ignoring historical and unselected feedback that is potentially beneficial. Moreover, the selection of exemplars only considers general semantic relatedness and may not be optimal in terms of task performance and matching with the optimized prompt. In this work, we propose an Exemplar-Guided Reflection with Memory mechanism (ERM) to realize more efficient and accurate prompt optimization. Specifically, we design an exemplar-guided reflection mechanism where feedback generation is additionally guided by the generated exemplars. We further build two kinds of memory to fully utilize the historical feedback information and support more effective exemplar retrieval. Empirical evaluations show our method surpasses the previous state of the art with fewer optimization steps, i.e., improving the F1 score by 10.1 on the LIAR dataset and halving the optimization steps of ProTeGi.
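
As a concrete illustration of the retrieval idea in the abstract, below is a minimal sketch of exemplar retrieval that scores candidates by both semantic similarity and historical task performance. The names (`ExemplarMemory`, `task_score`, `alpha`) and the linear scoring rule are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical semantic-plus-task-performance exemplar retrieval.
import numpy as np

class ExemplarMemory:
    def __init__(self, alpha: float = 0.7):
        self.alpha = alpha       # weight between semantic and task scores
        self.embeddings = []     # unit-normalized exemplar embeddings
        self.task_scores = []    # per-exemplar historical task performance
        self.exemplars = []      # exemplar texts

    def add(self, text: str, embedding: np.ndarray, task_score: float):
        self.exemplars.append(text)
        self.embeddings.append(embedding / np.linalg.norm(embedding))
        self.task_scores.append(task_score)

    def retrieve(self, query_emb: np.ndarray, k: int = 3):
        q = query_emb / np.linalg.norm(query_emb)
        sem = np.array([e @ q for e in self.embeddings])    # cosine similarity
        task = np.array(self.task_scores)
        score = self.alpha * sem + (1 - self.alpha) * task  # combined score
        top = np.argsort(score)[::-1][:k]
        return [self.exemplars[i] for i in top]
```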
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts for LLMs using historical feedback data
Improving exemplar selection for better task performance
Enhancing prompt efficiency with memory-guided reflection mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exemplar-guided reflection for feedback generation
Memory mechanisms for historical feedback utilization (see the sketch after this list)
Enhanced exemplar retrieval for optimized prompts
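
To make the memory mechanism concrete, here is a minimal sketch of a short-/long-term feedback memory, assuming feedback items are prioritized by the score gain they produced and decayed over time. These details are illustrative assumptions, not the paper's exact design:

```python
# Hypothetical long-/short-term feedback memory with priority decay.
from collections import deque
from dataclasses import dataclass, field

@dataclass(order=True)
class Feedback:
    priority: float                  # e.g., score gain when applied
    text: str = field(compare=False)

class FeedbackMemory:
    def __init__(self, short_size=5, long_size=50, decay=0.9):
        self.short = deque(maxlen=short_size)  # current-step feedback
        self.long = []                         # historical feedback pool
        self.long_size = long_size
        self.decay = decay

    def update(self, feedback_text: str, gain: float):
        # New feedback enters short-term memory immediately.
        self.short.append(feedback_text)
        # Older entries decay so stale feedback gradually loses priority.
        for fb in self.long:
            fb.priority *= self.decay
        self.long.append(Feedback(priority=gain, text=feedback_text))
        self.long.sort(reverse=True)
        self.long = self.long[: self.long_size]

    def top_feedback(self, k=3):
        # Mix current-step feedback with high-priority historical feedback.
        return list(self.short) + [fb.text for fb in self.long[:k]]
```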
👥 Authors
Cilin Yan
Beihang University
Jingyun Wang
Beihang University
Lin Zhang
ByteDance China
Ruihui Zhao
ByteDance
Natural Language Processing · Federated Machine Learning
Xiaopu Wu
ByteDance China
Kai Xiong
ByteDance China
Qingsong Liu
ByteDance China
Guoliang Kang
Professor, Beihang University
Deep learning and its applications
Yangyang Kang
DAMO Academy, Alibaba Group
LLM · NLP · KG · DL