ComPO: Preference Alignment via Comparison Oracles

📅 2025-05-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing direct preference alignment methods suffer from noise in preference pairs, which leads to verbose, redundant outputs and likelihood displacement that degrade alignment performance. To address this, we propose the first LLM preference alignment framework grounded in comparison oracles, with theoretical convergence guarantees. We further establish, for the first time, that preference pairs with a large likelihood difference require specialized modeling: although sparse, they are critical for alignment quality. Our method integrates likelihood-difference analysis of preference pairs with heuristic robust optimization. Extensive evaluation across multiple benchmarks (AlpacaEval 2, MT-Bench, and Arena-Hard) demonstrates consistent improvements. On Mistral-7B, Llama-3-8B, and Gemma-2-9B, our approach significantly outperforms state-of-the-art baselines, establishing a more robust and efficient paradigm for direct alignment under noisy preference data.

📝 Abstract
Direct alignment methods are increasingly used for aligning large language models (LLMs) with human preferences. However, these methods suffer from the issues of verbosity and likelihood displacement, which can be driven by noisy preference pairs that induce similar likelihoods for preferred and dispreferred responses. The contributions of this paper are two-fold. First, we propose a new preference alignment method based on comparison oracles and provide a convergence guarantee for its basic scheme. Second, we improve our method using heuristics and conduct experiments to demonstrate the flexibility and compatibility of the practical scheme in improving the performance of LLMs trained on noisy preference pairs. Evaluations are conducted across multiple base and instruction-tuned models (Mistral-7B, Llama-3-8B and Gemma-2-9B) with benchmarks (AlpacaEval 2, MT-Bench and Arena-Hard). Experimental results show the effectiveness of our method as an alternative approach to addressing the limitations of existing direct alignment methods. A highlight of our work is that we provide evidence for the importance of designing specialized methods for preference pairs with distinct likelihood margins, which complements the recent findings of Razin et al. (2025).
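The core idea of optimizing with only pairwise comparison feedback can be illustrated with a toy zeroth-order scheme: probe a random direction, ask the oracle which of the two perturbed points it prefers, and move there. This is a generic sketch under stated assumptions, not the paper's actual algorithm; `comparison_oracle`, the quadratic toy objective, and all hyperparameters are illustrative.

```python
import random

def comparison_oracle(a, b, target):
    """Toy oracle: prefers the point closer to a hidden target.
    Stands in for a preference signal; returns -1 if a is preferred, else 1.
    (Illustrative assumption, not the paper's oracle.)"""
    da = sum((ai - ti) ** 2 for ai, ti in zip(a, target))
    db = sum((bi - ti) ** 2 for bi, ti in zip(b, target))
    return -1 if da < db else 1

def oracle_search(x, oracle, steps=2000, step_size=0.1, seed=0):
    """Zeroth-order search driven only by pairwise comparisons:
    perturb along a random Gaussian direction and keep whichever
    side the oracle prefers."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(steps):
        u = [rng.gauss(0, 1) for _ in x]
        plus = [xi + step_size * ui for xi, ui in zip(x, u)]
        minus = [xi - step_size * ui for xi, ui in zip(x, u)]
        x = plus if oracle(plus, minus) == -1 else minus
    return x

# Example: recover a hidden target from comparison feedback alone.
target = [1.0, -2.0, 0.5]
x0 = [0.0, 0.0, 0.0]
x_final = oracle_search(x0, lambda a, b: comparison_oracle(a, b, target))
```

With a fixed step size the iterate oscillates in a small neighborhood of the optimum rather than converging exactly; the paper's convergence guarantee presumably relies on a more careful scheme than this sketch.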
Problem

Research questions and friction points this paper is trying to address.

Addressing verbosity and likelihood displacement in LLM alignment
Improving alignment using comparison oracles and heuristics
Enhancing LLM performance with noisy preference pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preference alignment via comparison oracles
Convergence guarantee for basic scheme
Heuristics for noisy preference pairs
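The abstract's point that pairs with distinct likelihood margins warrant distinct treatment can be sketched as a simple routing step: compute the log-likelihood difference between the chosen and rejected responses and separate small-margin (likely noisy) pairs from large-margin ones. This is a hypothetical illustration; the function name, input format, and threshold are assumptions, not the paper's heuristic.

```python
def split_by_margin(pairs, threshold=1.0):
    """Route preference pairs by log-likelihood margin.

    `pairs` holds (logp_chosen, logp_rejected) tuples. Pairs whose margin
    logp_chosen - logp_rejected falls below `threshold` are flagged as
    potentially noisy so a method can handle them separately.
    (Illustrative sketch; not the paper's actual procedure.)"""
    large_margin, small_margin = [], []
    for lp_c, lp_r in pairs:
        bucket = large_margin if lp_c - lp_r >= threshold else small_margin
        bucket.append((lp_c, lp_r))
    return large_margin, small_margin

# Example: one near-tied pair (noisy) and one clearly separated pair.
pairs = [(-5.0, -5.2), (-3.0, -8.0)]
large, small = split_by_margin(pairs, threshold=1.0)
```

Downstream, each bucket could feed a different loss or update rule, matching the paper's observation that large-margin pairs, though sparse, need specialized modeling.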