🤖 AI Summary
This work addresses the challenge of efficiently searching the combinatorially vast prompt space of language models, which is exacerbated by sparse reward signals. The authors frame prompt optimization as a posterior inference problem over latent prompts, guided by a meta-prompt prior. They propose an off-policy optimization framework based on Generative Flow Networks (GFlowNets), enhanced with a replay buffer and a priority queue to balance exploration and exploitation. A novel training-free dynamic memory update mechanism is introduced to focus sampling on high-reward regions without additional learning overhead. Empirical evaluations across few-shot classification, instruction induction, and question answering tasks demonstrate that the proposed method significantly outperforms existing discrete prompt optimization approaches.
📝 Abstract
Finding effective prompts for language models (LMs) is critical yet notoriously difficult: the prompt space is combinatorially large, and rewards are sparse because target-LM evaluation is expensive. Moreover, existing RL-based prompt optimizers often rely on on-policy updates and a meta-prompt sampled from a fixed distribution, leading to poor sample efficiency. We propose GFlowPO, a probabilistic prompt optimization framework that casts prompt search as a posterior inference problem over latent prompts, regularized by a meta-prompted reference-LM prior. First, we fine-tune a lightweight prompt-LM with an off-policy Generative Flow Network (GFlowNet) objective, using a replay-based training policy that reuses past prompt evaluations to enable sample-efficient exploration. Second, we introduce Dynamic Memory Update (DMU), a training-free mechanism that updates the meta-prompt by injecting both (i) diverse prompts from a replay buffer and (ii) top-performing prompts from a small priority queue, thereby progressively concentrating the search on high-reward regions. Across few-shot text classification, instruction induction benchmarks, and question answering tasks, GFlowPO consistently outperforms recent discrete prompt optimization baselines.
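The Dynamic Memory Update described above can be illustrated with a minimal Python sketch. This is an assumption-laden toy, not the paper's implementation: we assume a list-backed replay buffer, a `heapq`-based bounded priority queue for top-reward prompts, and a meta-prompt built by concatenating both kinds of exemplars; all class names, parameters, and the meta-prompt template are hypothetical.

```python
import heapq
import random

class DynamicMemory:
    """Toy sketch of a DMU-style mechanism: refresh the meta-prompt with
    (i) diverse prompts sampled from a replay buffer and (ii) top-reward
    prompts from a small priority queue. All names/sizes are illustrative."""

    def __init__(self, queue_size=4, n_diverse=2):
        self.replay = []              # all evaluated (prompt, reward) pairs
        self.topk = []                # bounded min-heap of (reward, prompt)
        self.queue_size = queue_size
        self.n_diverse = n_diverse

    def add(self, prompt, reward):
        self.replay.append((prompt, reward))
        if len(self.topk) < self.queue_size:
            heapq.heappush(self.topk, (reward, prompt))
        else:
            # push new entry, evict the lowest-reward one: keeps best K
            heapq.heappushpop(self.topk, (reward, prompt))

    def build_meta_prompt(self, task_description):
        # Exploration: uniform sample over the whole replay buffer.
        diverse = random.sample(self.replay,
                                min(self.n_diverse, len(self.replay)))
        # Exploitation: current top-K prompts, best first.
        best = sorted(self.topk, reverse=True)
        lines = [task_description, "High-reward prompts:"]
        lines += [f"- {p} (reward={r:.2f})" for r, p in best]
        lines += ["Other explored prompts:"]
        lines += [f"- {p}" for p, _ in diverse]
        return "\n".join(lines)
```

Because the update only samples and sorts existing evaluations, it adds no gradient steps, which is what makes a mechanism like this "training-free".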