🤖 AI Summary
This work addresses a critical limitation of existing automatic prompt optimization methods, which typically rely on randomly sampled evaluation subsets and do not systematically investigate how to select high-quality subsets, thereby constraining optimization efficacy. The paper is the first to formally model this problem as set function maximization and proves that, under mild conditions, the objective function is monotone and submodular. Leveraging these properties, the authors propose SESS, an efficient greedy subset selection method with theoretical guarantees. Experiments on benchmarks including GSM8K, MATH, and GPQA-Diamond show that subsets selected by SESS significantly outperform random and heuristic baselines, yielding substantial improvements in prompt optimization performance.
📝 Abstract
Automatic prompt optimization reduces manual prompt engineering, but it relies on task performance measured on a small, often randomly sampled, evaluation subset as its main feedback signal. Yet how to select that evaluation subset is usually treated as an implementation detail. We study evaluation subset selection for prompt optimization from a principled perspective and propose SESS, a submodular evaluation subset selection method. We frame selection as maximization of an objective set function and show that, under mild conditions, this function is monotone and submodular, enabling greedy selection with theoretical guarantees. Across GSM8K, MATH, and GPQA-Diamond, submodularly selected evaluation subsets can yield better optimized prompts than random or heuristic baselines.
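To illustrate the algorithmic core the abstract describes, here is a minimal sketch of greedy maximization of a monotone submodular set function under a cardinality budget. The objective used below (coverage of hypothetical "skills" by candidate evaluation examples) is an illustrative stand-in, not the paper's actual objective, and all names (`greedy_select`, `coverage`) are assumptions for this sketch.

```python
def greedy_select(items, f, k):
    """Greedily pick up to k items maximizing set function f.

    For a monotone submodular f, the classic greedy algorithm achieves
    at least a (1 - 1/e) fraction of the optimal value, which is the
    kind of theoretical guarantee the abstract refers to.
    """
    selected = []
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for x in items:
            if x in selected:
                continue
            # Marginal gain of adding x to the current selection.
            gain = f(selected + [x]) - f(selected)
            if gain > best_gain:
                best, best_gain = x, gain
        selected.append(best)
    return selected


# Illustrative monotone submodular objective: each candidate evaluation
# example "covers" a set of skills; f counts the skills covered.
coverage = {
    "q1": {"algebra"},
    "q2": {"algebra", "geometry"},
    "q3": {"geometry", "number_theory"},
    "q4": {"algebra"},
}

def f(subset):
    covered = set()
    for x in subset:
        covered |= coverage[x]
    return len(covered)

picked = greedy_select(list(coverage), f, 2)  # → ["q2", "q3"], covering all 3 skills
```

Submodularity (diminishing marginal gains) is exactly what makes this simple loop principled: each greedy step picks the example with the largest marginal contribution, and the guarantee follows without enumerating all subsets.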