🤖 AI Summary
This work addresses two key limitations of black-box large language model (LLM) prompt optimization: the underutilization of correct-prediction signals and poor cross-model transferability. To this end, we propose an enhanced feedback-driven prompt optimization framework. Methodologically, it introduces a dual-track reinforcement mechanism that retains effective prompt components using both positive and negative signals; it further integrates text-gradient reconstruction, multi-signal feedback aggregation, and noise filtering, and incorporates an explicit prompt transfer strategy. Our key contribution is the first systematic integration of positive reinforcement into automated prompt optimization, enabling the active exploitation of correct predictions. Experiments demonstrate that our method consistently outperforms strong baselines on both standard prompt optimization and cross-model/cross-API transfer tasks, simultaneously improving accuracy, convergence speed, and computational efficiency.
📝 Abstract
An increasing number of NLP applications interact with large language models (LLMs) through black-box APIs, making prompt engineering critical for controlling model outputs. While recent Automatic Prompt Optimization (APO) methods iteratively refine prompts using model-generated feedback known as textual gradients, they primarily focus on error correction and neglect the valuable insights available from correct predictions. This limits both their effectiveness and efficiency. In this paper, we propose a novel APO framework centered on enhancing the feedback mechanism. We reinterpret the textual gradient as a form of negative reinforcement and introduce a complementary positive reinforcement signal that explicitly preserves beneficial prompt components identified through successful predictions. To mitigate the noise inherent in LLM-generated feedback, we further propose feedback diversification, which aggregates multiple feedback signals, emphasizing consistent, actionable advice while filtering out outliers. Motivated by the rapid evolution and diversity of available LLMs, we also formalize Continual Prompt Optimization (CPO), addressing the practical challenge of efficiently migrating optimized prompts between different model versions or API providers. Our experiments reveal that naive prompt migration often degrades performance due to the loss of critical instructions. In contrast, our approach consistently outperforms strong baselines, achieving significant accuracy improvements, faster convergence, and lower computational cost in both standard and migration scenarios.
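To make the feedback-diversification idea concrete, here is a minimal sketch of the aggregation step described in the abstract. It assumes the optimizer has already sampled several feedback responses from the LLM and split each into atomic suggestion strings; the function name, the `min_support` threshold, and the exact-match grouping of suggestions are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def diversify_feedback(feedback_samples, min_support=0.5):
    """Aggregate multiple LLM feedback samples into consensus advice.

    feedback_samples: list of samples, each a list of suggestion strings
    min_support: fraction of samples a suggestion must appear in to survive

    Suggestions produced consistently across samples are kept (ordered by
    how often they occur); rare, inconsistent advice is treated as noise
    and filtered out.
    """
    n = len(feedback_samples)
    # Count each suggestion at most once per sample.
    counts = Counter(s for sample in feedback_samples for s in set(sample))
    return [s for s, c in counts.most_common() if c / n >= min_support]

samples = [
    ["be concise", "add examples"],
    ["be concise", "use numbered steps"],
    ["be concise", "add examples"],
]
print(diversify_feedback(samples))  # → ['be concise', 'add examples']
```

In practice, grouping by exact string match would be replaced by a semantic similarity measure, since two LLM samples rarely phrase the same advice identically; the consensus-and-threshold structure is the point of the sketch.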