Ordinal Preference Optimization: Aligning Human Preferences via NDCG

📅 2024-10-06
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing Bradley–Terry–based alignment methods (e.g., DPO, RLHF) rely on pairwise comparisons, which cannot capture the full ranking structure among multiple responses and thus limit alignment with diverse human preferences. This work introduces the information-retrieval metric Normalized Discounted Cumulative Gain (NDCG) into LLM alignment for the first time, proposing a differentiable ordinal preference optimization framework. A surrogate loss built from a differentiable NDCG approximation enables end-to-end optimization over ranked lists of multiple responses. The method further combines ordinal preference modeling with a negative-sample pool expansion strategy that mitigates the adverse effect of trivial negatives. Evaluation on benchmarks including AlpacaEval shows consistent improvements over pairwise baselines such as DPO, validating the method's effectiveness in improving both response quality and fidelity to human ordinal preferences.

📝 Abstract
Aligning Large Language Models (LLMs) with diverse human preferences is a pivotal technique for controlling model behaviors and enhancing generation quality. Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), and their variants optimize language models by pairwise comparisons. However, when multiple responses are available, these approaches fall short of leveraging the extensive information in the ranking given by the reward models or human feedback. In this work, we propose a novel listwise approach named Ordinal Preference Optimization (OPO), which employs the Normalized Discounted Cumulative Gain (NDCG), a widely-used ranking metric, to better utilize relative proximity within ordinal multiple responses. We develop an end-to-end preference optimization algorithm by approximating NDCG with a differentiable surrogate loss. This approach builds a connection between ranking models in information retrieval and the alignment problem. In aligning multi-response datasets assigned with ordinal rewards, OPO outperforms existing pairwise and listwise approaches on evaluation sets and general benchmarks like AlpacaEval. Moreover, we demonstrate that increasing the pool of negative samples can enhance model performance by reducing the adverse effects of trivial negatives.
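The abstract describes approximating NDCG with a differentiable surrogate so that listwise ranking quality can be optimized end-to-end. The paper does not spell out the surrogate in this listing, so the sketch below uses an ApproxNDCG-style smoothing (replacing each item's hard rank with a sigmoid-smoothed soft rank) as one plausible instantiation; the function names and the temperature parameter are illustrative, not taken from the paper.

```python
import math

def dcg(rels):
    # Discounted cumulative gain of a relevance list in ranked order.
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg(scores, rels):
    # Exact (non-differentiable) NDCG: sort responses by model score,
    # then normalize their DCG by the ideal DCG of a perfect ordering.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return dcg([rels[i] for i in order]) / dcg(sorted(rels, reverse=True))

def approx_ndcg(scores, rels, temp=0.1):
    # ApproxNDCG-style surrogate: the hard rank of item i is replaced by
    # a soft rank 1 + sum_j sigmoid((s_j - s_i) / temp), which is smooth
    # in the scores and therefore usable as a training loss (e.g. 1 - NDCG).
    n = len(scores)
    ideal = dcg(sorted(rels, reverse=True))
    total = 0.0
    for i in range(n):
        soft_rank = 1.0 + sum(
            1.0 / (1.0 + math.exp(-(scores[j] - scores[i]) / temp))
            for j in range(n) if j != i
        )
        total += (2 ** rels[i] - 1) / math.log2(1 + soft_rank)
    return total / ideal
```

As the temperature shrinks, the soft ranks approach the hard ranks and the surrogate converges to exact NDCG; in training one would compute the scores with the policy model and minimize `1 - approx_ndcg` over each list of ordinally rewarded responses.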
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs with diverse human preferences to control model behavior
Pairwise methods (RLHF, DPO) discard ranking information when multiple responses are available
Better exploiting ordinal rewards via an NDCG-based listwise alignment objective
Innovation

Methods, ideas, or system contributions that make the work stand out.

Listwise preference optimization using the NDCG ranking metric
Differentiable surrogate loss approximating NDCG for end-to-end training
Outperforms pairwise and listwise baselines on evaluation sets and AlpacaEval