Data-Centric Human Preference Optimization with Rationales

📅 2024-07-19
🏛️ arXiv.org
📈 Citations: 3
Influential: 1
🤖 AI Summary
Existing preference datasets do not explicitly model the rationales behind human decisions, which leads to inefficient alignment learning, susceptibility to verbosity and hallucination, and high annotation costs. This paper proposes a data-centric, rationale-augmented paradigm: (1) it systematically validates that freely available large language models (LLMs) can self-generate decision rationales for preference learning; (2) it introduces a lightweight, general-purpose rationale-injection framework that requires no additional human annotation or model fine-tuning and is compatible with mainstream algorithms such as DPO and KTO; and (3) it jointly optimizes rationale-guided contrastive learning and supervised fine-tuning. Experiments demonstrate substantial improvements in data efficiency and training convergence speed, consistent reductions in hallucination rates and redundant outputs across multiple benchmarks, and state-of-the-art performance gains.

📝 Abstract
Reinforcement learning from human feedback plays a crucial role in aligning language models towards human preferences, traditionally represented through comparisons between pairs or sets of responses within a given context. While many studies have enhanced algorithmic techniques to optimize learning from such data, this work shifts focus to improving preference learning through a data-centric approach. Specifically, we propose enriching existing preference datasets with machine-generated rationales that explain the reasons behind choices. We develop a simple and principled framework to augment current preference learning methods with rationale information. Our comprehensive analysis highlights how rationales enhance learning efficiency. Extensive experiments reveal that rationale-enriched preference learning offers multiple advantages: it improves data efficiency, accelerates convergence to higher-performing models, and reduces verbosity bias and hallucination. Furthermore, this framework is versatile enough to integrate with various preference optimization algorithms. Overall, our findings highlight the potential of re-imagining data design for preference learning, demonstrating that even freely available machine-generated rationales can significantly boost performance across multiple dimensions. The code repository is available at https://github.com/reds-lab/preference-learning-with-rationales
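The abstract describes a data-centric framework that enriches existing preference pairs with machine-generated rationales and plugs into optimizers such as DPO. As a rough illustration (not the paper's actual objective), the sketch below pairs a standard per-example DPO loss with a hypothetical data-side augmentation that appends a rationale to the prompt while leaving the chosen/rejected pair untouched; the function names and the augmentation format are assumptions, not the paper's API.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair.

    Inputs are summed token log-probabilities of each response under the
    policy and the frozen reference model; beta scales the implicit reward.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))

def with_rationale(example, rationale):
    """Hypothetical data-side augmentation: keep the preference pair as-is
    and enrich only the context with a machine-generated rationale."""
    return {**example,
            "prompt": f"{example['prompt']}\n\nRationale: {rationale}"}
```

Because the augmentation changes only the data, the same loss (or KTO, or any other preference objective) can be trained on the enriched examples unchanged, which is the sense in which the framework is algorithm-agnostic.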
Problem

Research questions and friction points this paper is trying to address.

Enhancing human preference learning with explanatory rationales
Addressing ambiguity in standard preference datasets for better alignment
Improving model performance and convergence through data augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augmenting preference pairs with rationales
Leveraging machine-generated rationales for enrichment
Improving model alignment via data design