Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment

📅 2024-06-06
📈 Citations: 10
Influential: 1
🤖 AI Summary
To address the high cost of human annotation in aligning large language models (LLMs) with human preferences, this paper proposes an iterative self-labeling framework that requires neither external reward models nor in-context learning. The method directly and explicitly extracts intrinsic preference signals from raw LLM logits—a novel approach—and integrates them with a noise-aware preference learning algorithm to enhance the robustness of self-generated labels. Evaluated on AlpacaEval 2.0, the framework achieves superior performance using only 3.3% of the real preference annotations from the UltraFeedback dataset, outperforming both the full-supervision baseline and existing state-of-the-art methods. The framework is computationally efficient, scalable, and theoretically interpretable, establishing a new paradigm for preference alignment under low-resource conditions.
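The core "direct preference judgment" idea, labeling the response that the model itself assigns higher likelihood as preferred, can be sketched as follows. This is a minimal illustration from raw logits, not the paper's exact scoring rule (SPA may combine this with a reference model or other terms); the function names and the toy shapes are assumptions for illustration.

```python
import numpy as np

def sequence_logprob(token_logits, token_ids):
    """Sum of per-token log-probabilities of a response under the model.

    token_logits: (T, V) array of raw logits, one row per generated token.
    token_ids:    length-T sequence of the sampled token ids.
    """
    # Numerically stable log-softmax over the vocabulary.
    shifted = token_logits - token_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return float(log_probs[np.arange(len(token_ids)), token_ids].sum())

def judge_preference(logits_a, ids_a, logits_b, ids_b):
    """Self-annotate a preference pair: the higher-likelihood response is 'chosen'."""
    lp_a = sequence_logprob(logits_a, ids_a)
    lp_b = sequence_logprob(logits_b, ids_b)
    return ("a", "b") if lp_a >= lp_b else ("b", "a")
```

The self-annotated (chosen, rejected) pairs produced this way are what the framework then feeds into its noise-aware preference learning step.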

📝 Abstract
Aligning large language models (LLMs) with human preferences has become a key component of obtaining state-of-the-art performance, but constructing a large human-annotated preference dataset is hugely costly. To tackle this problem, we propose a new framework, Spread Preference Annotation with direct preference judgment (SPA), that boosts the alignment of LLMs using only a very small amount of human-annotated preference data. Our key idea is to leverage the human prior knowledge within the small (seed) data and progressively improve the alignment of the LLM by iteratively generating responses and learning from them with self-annotated preference data. Specifically, we propose to derive the preference label from the logits of the LLM to explicitly extract the model's inherent preference. Compared to previous approaches using external reward models or implicit in-context learning, we observe that the proposed approach is significantly more effective. In addition, we introduce a noise-aware preference learning algorithm to mitigate the risk of low quality within the generated preference data. Our experimental results demonstrate that the proposed framework significantly boosts the alignment of LLMs. For example, we achieve superior alignment performance on AlpacaEval 2.0 with only 3.3% of the ground-truth preference labels in the UltraFeedback data, compared to using the entire dataset or state-of-the-art baselines.
Problem

Research questions and friction points this paper is trying to address.

Aligning LLMs with human preferences efficiently
Reducing cost of human-annotated preference datasets
Improving LLM alignment with minimal human data
Innovation

Methods, ideas, or system contributions that make the work stand out.

SPA framework uses minimal human-annotated data
Derives preference labels from LLM logits
Noise-aware algorithm improves preference learning
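The noise-aware learning step trains on the self-annotated pairs while discounting possible mislabels. A common way to make a DPO-style preference loss noise-robust is label smoothing on the preference label; the sketch below uses that variant as an illustration, and `beta`/`eps` are assumed hyperparameters, not the paper's reported values or its exact objective.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def noise_aware_pref_loss(logp_chosen, logp_rejected,
                          ref_logp_chosen, ref_logp_rejected,
                          beta=0.1, eps=0.1):
    """DPO-style preference loss with label smoothing.

    The smoothing weight eps treats the self-annotated label as correct
    with probability (1 - eps), so a possibly mislabeled pair contributes
    a bounded gradient. All logp_* are sequence log-probabilities under
    the policy and a frozen reference model.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return (-(1.0 - eps) * math.log(sigmoid(margin))
            - eps * math.log(sigmoid(-margin)))
```

With `eps=0` this reduces to the standard DPO loss; raising `eps` flattens the objective around uncertain pairs, which is the role the noise-aware component plays for self-generated labels.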
Dongyoung Kim
Korea Advanced Institute of Science and Technology
Kimin Lee
KAIST
Artificial Intelligence, Reinforcement Learning, Deep Learning
Jinwoo Shin
ICT Endowed Chair Professor
Machine Learning, Deep Learning
Jaehyung Kim
Carnegie Mellon University