Annotation-Efficient Preference Optimization for Language Model Alignment

📅 2024-05-22
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
To address the high cost of preference annotation and the trade-off between data quality and diversity in aligning large language models (LLMs), this paper proposes the Annotation-Efficient Preference Optimization (AEPO) framework. AEPO jointly models response selection and preference annotation, leveraging information gain and diversity as optimization criteria. It employs response quality assessment, diversity quantification, and submodular optimization to actively select high-quality, representative response subsets. Integrated with Direct Preference Optimization (DPO), AEPO achieves superior alignment performance under identical annotation budgets: it improves average win rates by 8.3% over standard DPO while reducing annotation costs by over 40%. The core innovation lies in abandoning the conventional full-annotation paradigm, enabling synergistic optimization of annotation efficiency and model performance.

Technology Category

Application Category

📝 Abstract
Preference optimization is a standard approach to fine-tuning large language models to align with human preferences. The quality, diversity, and quantity of the preference dataset are critical to the effectiveness of preference optimization. However, obtaining a large amount of high-quality and diverse preference annotations is difficult in many applications. This raises the question of how to use the limited annotation budget to create an effective preference dataset. To this end, we propose Annotation-Efficient Preference Optimization (AEPO). Instead of exhaustively annotating preference over all available response texts, AEPO selects a subset of responses that maximizes quality and diversity from the available responses, and then annotates preference over the selected ones. In this way, AEPO focuses the annotation budget on labeling preference over a smaller subset of responses with diversity and of high quality. We evaluate the performance of Direct Preference Optimization (DPO) using AEPO and show that it outperforms models trained using a standard DPO with the same annotation budget. Our code is available at https://github.com/CyberAgentAILab/annotation-efficient-po
Problem

Research questions and friction points this paper is trying to address.

Optimizing language model alignment with limited annotation budget
Selecting diverse representative responses for preference annotation
Improving preference learning efficiency through strategic data selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selects diverse representative response subset
Annotates preferences on informative smaller set
Maximizes annotation budget efficiency effectively
🔎 Similar Papers
No similar papers found.