Aligning Language Models Using Follow-up Likelihood as Reward Signal

📅 2024-09-20
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Current language model alignment methods rely heavily on manually or LLM-annotated preference data, which is costly and prone to annotation bias. Method: We propose an unsupervised reward modeling framework that eliminates explicit labeling by leveraging users' natural conversational behavior: the generation probability of follow-up utterances serves as an intrinsic reward signal, termed Follow-up Likelihood as Reward (FLR). Our approach combines FLR with automatic preference data mining, Direct Preference Optimization (DPO), and natural-language-feedback-driven reward model fine-tuning. Contribution/Results: FLR matches or exceeds reward models trained on large-scale human or GPT-4 annotated data across eight pairwise-preference and four rating-based benchmarks. It substantially improves the base policy model's helpfulness, and subsequent feedback-guided fine-tuning of the scoring model further strengthens reward modeling and alignment. This work establishes a paradigm for automatic preference mining and reward modeling grounded in implicit user behavioral signals.

📝 Abstract
In natural human-to-human conversations, participants often receive feedback signals from one another based on their follow-up reactions. These reactions can include verbal responses, facial expressions, changes in emotional state, and other non-verbal cues. Similarly, in human-machine interactions, the machine can leverage the user's follow-up utterances as feedback signals to assess whether it has appropriately addressed the user's request. Therefore, we propose using the likelihood of follow-up utterances as rewards to differentiate preferred responses from less favored ones, without relying on human or commercial LLM-based preference annotations. Our proposed reward mechanism, "Follow-up Likelihood as Reward" (FLR), matches the performance of strong reward models trained on large-scale human or GPT-4 annotated data on 8 pairwise-preference and 4 rating-based benchmarks. Building upon the FLR mechanism, we propose to automatically mine preference data from the online generations of a base policy model. The preference data are subsequently used to boost the helpfulness of the base model through direct alignment from preference (DAP) methods, such as direct preference optimization (DPO). Lastly, we demonstrate that fine-tuning the language model that provides follow-up likelihood with natural language feedback significantly enhances FLR's performance on reward modeling benchmarks and effectiveness in aligning the base policy model's helpfulness.
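To make the reward mechanism concrete, below is a minimal sketch of FLR-style scoring. It assumes an off-the-shelf Hugging Face causal LM (gpt2 as a placeholder) and a single illustrative positive follow-up utterance; the paper's actual scoring model, prompts, and follow-up set differ.

```python
# Minimal sketch of Follow-up Likelihood as Reward (FLR) scoring.
# Assumptions (not from the paper): the scoring model is an off-the-shelf
# Hugging Face causal LM, and "Thanks, that helps a lot!" stands in for a
# positive follow-up utterance.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder scoring model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def flr_reward(dialogue_context: str, response: str,
               follow_up: str = "Thanks, that helps a lot!") -> float:
    """Return the log-likelihood of a positive follow-up utterance,
    conditioned on the dialogue context and the candidate response."""
    prefix = dialogue_context + "\n" + response + "\n"
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    follow_ids = tokenizer(follow_up, return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, follow_ids], dim=1)

    with torch.no_grad():
        logits = model(input_ids).logits

    # Log-probabilities of the follow-up tokens only (teacher forcing):
    # the prediction for token i comes from position i - 1.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    follow_start = prefix_ids.shape[1]
    target = input_ids[:, follow_start:]
    token_log_probs = log_probs[:, follow_start - 1:, :].gather(
        -1, target.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()

# A higher FLR reward means the response is more likely to elicit the
# positive follow-up, so it is treated as the preferred response.
better = flr_reward("User: How do I reset my router?",
                    "Hold the reset button for 10 seconds.")
worse = flr_reward("User: How do I reset my router?", "I don't know.")
```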
Problem

Research questions and friction points this paper is trying to address.

Aligning language models without human annotations
Using follow-up likelihood as reward signal
Enhancing model helpfulness through automatic preference mining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses follow-up likelihood as reward signal
Mines preference data from model generations
Enhances the base model with direct preference optimization (see the sketch after this list)
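As a rough illustration of how the mined preferences feed into direct alignment, here is a short sketch. Assumptions not taken from the paper: candidate responses come from a hypothetical `policy.generate` helper, and `flr_reward` is the scoring function sketched above; training with TRL's DPOTrainer is mentioned only in a comment.

```python
# Minimal sketch of FLR-based preference mining feeding DPO.
# Assumptions (not from the paper): `policy.generate` is a hypothetical
# sampling helper, and `flr_reward` is the scoring function sketched above.

def mine_preference_pairs(prompts, policy, num_samples=4):
    """Rank online generations by FLR and keep the best/worst as a pair."""
    pairs = []
    for prompt in prompts:
        candidates = [policy.generate(prompt) for _ in range(num_samples)]
        scored = sorted(candidates, key=lambda r: flr_reward(prompt, r))
        pairs.append({"prompt": prompt,
                      "chosen": scored[-1],    # highest follow-up likelihood
                      "rejected": scored[0]})  # lowest follow-up likelihood
    return pairs

# The mined pairs can then be handed to a direct-alignment method such as DPO,
# e.g. datasets.Dataset.from_list(pairs) consumed by trl.DPOTrainer.
```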
Chen Zhang
National University of Singapore; The Chinese University of Hong Kong (Shenzhen), China; Shenzhen Research Institute of Big Data, China
Dading Chong
Peking University
Multimodal representation · Large language models · Multimodal recommendation
Feng Jiang
The Chinese University of Hong Kong (Shenzhen), China; Shenzhen Research Institute of Big Data, China
Chengguang Tang
Tencent AI Lab, China; Shenzhen Research Institute of Big Data, China
Anningzhe Gao
Shenzhen Research Institute of Big Data, China
Guohua Tang
Tencent AI Lab, China; Shenzhen Research Institute of Big Data, China
Haizhou Li
The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China; NUS, Singapore
Automatic Speech Recognition · Speaker Recognition · Language Recognition · Voice Conversion · Machine Translation