Larger or Smaller Reward Margins to Select Preferences for Alignment?

📅 2025-02-25
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing preference dataset evaluation metrics suffer from inconsistency, hindering effective value alignment of large language models (LLMs). Method: We propose "alignment potential," a novel metric that quantifies the discrepancy between a model's implicit reward margin and the target explicit reward margin, thereby unifying explicit and implicit evaluation standards. Our approach integrates reward margin modeling, dynamic preference filtering, and self-play-based data generation to enable continuous quality assessment and multi-objective alignment optimization during training. Contribution/Results: Experiments demonstrate that alignment potential significantly outperforms existing evaluation methods across diverse base models and alignment objectives. Within the self-play paradigm, it achieves new state-of-the-art performance; moreover, alignment quality improves steadily with increasing data scale and training iterations. The framework provides both an interpretable theoretical foundation and a practical, optimization-friendly pathway for constructing high-quality preference datasets.

๐Ÿ“ Abstract
Preference learning is critical for aligning large language models (LLMs) with human values, with the quality of preference datasets playing a crucial role in this process. While existing metrics primarily assess data quality based on either explicit or implicit reward margins, they often provide contradictory evaluations for the same data. To address this issue, we introduce the alignment potential metric, which quantifies the gap from the model's current implicit reward margin to the target explicit reward margin, thereby estimating the model's potential to align with the preference data. Empirical results demonstrate that training on data selected by this metric consistently enhances alignment performance, surpassing existing metrics across different base models and optimization objectives. Furthermore, our method extends to self-play data generation frameworks, where the metric is used to identify high-quality data within the self-generated content by LLMs. Under this data generation scenario, our method surpasses current state-of-the-art (SOTA) results across various training settings and demonstrates continuous improvements in alignment performance as dataset size and training iterations increase.
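The abstract defines alignment potential as the gap from the model's current implicit reward margin to the target explicit reward margin. The sketch below illustrates one plausible reading of that computation, assuming a DPO-style implicit reward (β-scaled log-probability ratio against a reference model) and an explicit margin from a separate reward model; the function names, the `beta` value, and the exact formula are illustrative assumptions, not the paper's verbatim definitions.

```python
def implicit_reward_margin(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style implicit reward margin (assumed form):
    beta * [(log pi(y_w) - log pi_ref(y_w)) - (log pi(y_l) - log pi_ref(y_l))]."""
    return beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))

def alignment_potential(explicit_margin, logp_w, logp_l,
                        ref_logp_w, ref_logp_l, beta=0.1):
    """Gap from the model's current implicit margin to the target
    explicit margin (e.g., from an external reward model)."""
    implicit = implicit_reward_margin(logp_w, logp_l,
                                      ref_logp_w, ref_logp_l, beta)
    return explicit_margin - implicit

def select_top_k(samples, k):
    """Keep the k preference pairs the model has the most room to learn from.
    Each sample is a dict whose keys match alignment_potential's parameters."""
    ranked = sorted(samples, key=lambda s: alignment_potential(**s),
                    reverse=True)
    return ranked[:k]
```

Under this reading, a large potential marks a pair whose explicit preference signal the policy has not yet internalized, which is why training on high-potential data would be expected to yield the largest alignment gains.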
Problem

Research questions and friction points this paper is trying to address.

Assessing data quality for aligning LLMs with human values.
Introducing a metric to quantify alignment potential in preference data.
Enhancing alignment performance through improved data selection and generation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces alignment potential metric for preference learning.
Enhances alignment performance using selected high-quality data.
Extends to self-play data generation for continuous improvement.