🤖 AI Summary
To address the computational intractability of multi-preference optimization in large language model (LLM) self-play alignment, where the sheer number of candidate responses makes including them all infeasible, this paper proposes an active subset selection framework. Building on on-policy response generation, the method combines active sampling based on response embeddings and reward scores, coverage via semantic clustering, and a multi-preference group-contrastive loss. Theoretically, it provides guarantees on expected reward maximization while uncovering overlooked semantic modes and reward extremes. Unlike conventional uniform sampling or static filtering, this framework is the first to jointly model on-policy generation, active selection, and group-wise contrastive learning. Evaluated on AlpacaEval with Llama-3-8B, it achieves state-of-the-art performance (62.3% win rate), substantially improving training-signal quality and alignment efficiency.
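As a rough illustration of the selection step described above, the sketch below embeds and reward-scores a pool of candidates, clusters the embeddings, and keeps the reward extremes plus one representative per semantic cluster. The function name, cluster count, and per-cluster rule are assumptions made for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch of active subset selection over on-policy candidates.
# Assumes each candidate response already has a scalar reward and an embedding.
import numpy as np
from sklearn.cluster import KMeans

def select_subset(embeddings: np.ndarray, rewards: np.ndarray, k: int = 4) -> list[int]:
    """Pick a small subset covering reward extremes and distinct semantic clusters."""
    chosen = {int(np.argmax(rewards)), int(np.argmin(rewards))}  # reward extremes
    # Cluster the candidate embeddings to capture distinct semantic modes.
    labels = KMeans(n_clusters=k, n_init="auto").fit_predict(embeddings)
    for c in range(k):
        members = np.flatnonzero(labels == c)
        # Keep the highest-reward response in each cluster as its representative.
        chosen.add(int(members[np.argmax(rewards[members])]))
    return sorted(chosen)

# Example: 64 on-policy candidates with 768-d embeddings and scalar rewards.
rng = np.random.default_rng(0)
idx = select_subset(rng.normal(size=(64, 768)), rng.normal(size=64))
print(idx)  # indices of the selected candidates
```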
📝 Abstract
Multi-preference optimization enriches language-model alignment beyond pairwise preferences by contrasting entire sets of helpful and undesired responses, thereby enabling richer training signals for large language models. During self-play alignment, these models often produce numerous candidate answers per query, rendering it computationally infeasible to include all responses in the training objective. In this work, we propose *Active Multi-Preference Optimization* (AMPO), a novel approach that combines on-policy generation, a multi-preference group-contrastive loss, and active subset selection. Specifically, we score and embed large candidate pools of responses and then select a small, yet informative, subset that covers reward extremes and distinct semantic clusters for preference optimization. Our contrastive training scheme is capable of identifying not only the best and worst answers but also subtle, underexplored modes that are crucial for robust alignment. Theoretically, we provide guarantees for expected reward maximization using our active selection method, and empirically, AMPO achieves state-of-the-art results on *AlpacaEval* using Llama 8B.
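To make the group-contrastive idea concrete, here is a minimal, hypothetical sketch of one way such a loss can be written over a selected subset: preferred responses are contrasted against the entire group through a softmax over policy scores. The function name, score definition, and labeling scheme are assumptions for illustration, not AMPO's actual objective.

```python
# Hypothetical group-contrastive loss over a selected subset of responses.
# Each response i has a policy score s_i (e.g., a length-normalized
# log-probability) and a binary label marking it as preferred or undesired.
import torch

def group_contrastive_loss(scores: torch.Tensor, preferred: torch.Tensor) -> torch.Tensor:
    """Contrast preferred responses against the whole group via a softmax."""
    log_probs = torch.log_softmax(scores, dim=-1)    # normalize scores over the group
    return -(log_probs[preferred.bool()]).mean()     # pull up preferred, push down the rest

scores = torch.tensor([2.1, -0.3, 0.7, -1.5], requires_grad=True)  # scores for 4 selected responses
preferred = torch.tensor([1, 0, 1, 0])                              # 1 = helpful, 0 = undesired
loss = group_contrastive_loss(scores, preferred)
loss.backward()
print(loss.item())
```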