🤖 AI Summary
This work addresses the instability in existing automatic prompt optimization methods caused by noisy and conflicting update signals. To mitigate semantic conflicts and better characterize decision boundaries, the authors propose Boundary-Aware Contrastive Sampling (BACS) and Momentum-Guided Semantic Clustering (MGSC), which leverage triplet feature mining and a time-decayed textual momentum mechanism. By integrating batch-level statistics with gradient consistency analysis, the approach improves optimization stability. Experiments show that the method consistently outperforms state-of-the-art baselines such as PromptWizard and ProTeGi across multiple benchmarks, with average performance gains of 1.58% and 3.35%, respectively. Notably, it enables a general-purpose LLM with 3B activated parameters to surpass a specialized 70B dense model.
📝 Abstract
Automatic prompt optimization is a promising direction for boosting the performance of Large Language Models (LLMs). However, existing methods often suffer from noisy and conflicting update signals. In this research, we propose C-MOP (Cluster-based Momentum Optimized Prompting), a framework that stabilizes optimization via Boundary-Aware Contrastive Sampling (BACS) and Momentum-Guided Semantic Clustering (MGSC). Specifically, BACS uses batch-level information to mine three feature types (Hard Negatives, Anchors, and Boundary Pairs) that precisely characterize the typical representations and decision boundaries of positive and negative prompt samples. To resolve semantic conflicts, MGSC introduces a textual momentum mechanism with temporal decay that distills persistent consensus from fluctuating gradients across iterations. Extensive experiments demonstrate that C-MOP consistently outperforms state-of-the-art baselines such as PromptWizard and ProTeGi, yielding average gains of 1.58% and 3.35%, respectively. Notably, C-MOP enables a general LLM with 3B activated parameters to surpass a 70B domain-specific dense LLM, highlighting its effectiveness in driving precise prompt evolution. The code is available at https://github.com/huawei-noah/noah-research/tree/master/C-MOP.
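To give a feel for the temporal-decay idea behind MGSC, here is a minimal numeric sketch. This is not the authors' implementation (their "momentum" operates over textual edit signals via an LLM); the function name, the `decay` constant, and the use of scalar strengths as stand-ins for per-iteration textual gradients are all illustrative assumptions.

```python
def momentum_consensus(signals, decay=0.9):
    """Aggregate per-iteration signal strengths with exponential time decay.

    Illustrative analogy only: older iterations contribute less, so
    transient, conflicting updates are damped while signals that recur
    across iterations accumulate into a persistent consensus score.
    `signals` maps a candidate edit to its per-iteration strengths,
    ordered oldest to newest. All names here are hypothetical.
    """
    consensus = {}
    for edit, history in signals.items():
        score = 0.0
        for strength in history:  # oldest first, so early signals decay most
            score = decay * score + (1 - decay) * strength
        consensus[edit] = score
    return consensus

# A suggestion that recurs across iterations outranks one that appears
# only once, even if that single appearance is strong.
scores = momentum_consensus({
    "add step-by-step reasoning": [0.8, 0.7, 0.9, 0.8],
    "shorten instructions":       [0.0, 0.0, 0.0, 1.0],
})
```

In this toy setting, the consistently recurring edit ends with the higher consensus score, mirroring the paper's stated goal of distilling persistent consensus from fluctuating gradients.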