Contrastive Policy Gradient: Aligning LLMs on sequence-level scores in a supervised-friendly fashion

📅 2024-06-27
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 1
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM alignment methods struggle to efficiently leverage arbitrary sequence-level rewards (e.g., unit tests, textual entailment) and rely on high-cost online sampling and importance weighting. To address this, we propose Contrastive Policy Gradient (CoPG), a novel offline policy gradient algorithm that eliminates the need for importance sampling. CoPG is the first to integrate contrastive learning into the policy gradient framework, unifying Identity Preference Optimization (IPO) and classical policy gradient methods. It introduces a state-dependent baseline calibration mechanism, enabling unbiased gradient estimation from offline, off-policy data. Experiments demonstrate that CoPG significantly outperforms PPO and IPO on summarization tasks; ablation studies on a toy bandit environment confirm its gradient accuracy and training stability. Moreover, CoPG reduces computational overhead by over 35%, achieving both generality across reward types and superior training efficiency.
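The toy-bandit setting mentioned in the summary can be sketched with a generic pairwise policy-gradient update. The pairing scheme, reward values, and learning rate below are illustrative assumptions, not the paper's exact CoPG loss; the sketch only conveys the flavor of the idea: actions come from a fixed offline behavior policy, each sample's partner serves as its reward baseline, and no importance weights appear.

```python
import math
import random

# Illustrative pairwise ("contrastive") policy-gradient sketch on a toy
# 3-armed bandit. This is a simplified stand-in for intuition, NOT the
# paper's exact CoPG objective.
REWARDS = [0.1, 0.9, 0.5]  # deterministic reward per arm; arm 1 is best
K = len(REWARDS)

def softmax(theta):
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    z = sum(exps)
    return [e / z for e in exps]

def pairwise_step(theta, a, r, a2, r2, lr):
    # For softmax logits, grad log pi(a) = onehot(a) - pi; the pi terms
    # cancel across the pair, leaving onehot(a) - onehot(a2).
    for k in range(K):
        theta[k] += lr * (r - r2) * ((k == a) - (k == a2))
    return theta

random.seed(0)
theta = [0.0] * K
# Offline dataset: 2000 draws from a fixed uniform behavior policy.
data = [(a, REWARDS[a]) for a in (random.randrange(K) for _ in range(2000))]
for (a, r), (a2, r2) in zip(data[0::2], data[1::2]):
    pairwise_step(theta, a, r, a2, r2, lr=0.1)

pi = softmax(theta)
best = max(range(K), key=lambda k: pi[k])
print(best, [round(p, 3) for p in pi])
```

Even though the data is purely off-policy, the pairwise update pushes each arm's logit in proportion to its reward advantage over the partner sample, so the learned policy concentrates on the best arm.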

πŸ“ Abstract
Reinforcement Learning (RL) has been used to finetune Large Language Models (LLMs) using a reward model trained from preference data, to better align with human judgment. The recently introduced direct alignment methods, which are often simpler, more stable, and computationally lighter, can more directly achieve this. However, these approaches cannot optimize arbitrary rewards, and preference-based rewards are not the only ones of interest for LLMs (e.g., unit tests for code generation or textual entailment for summarization, among others). RL-finetuning is usually done with a variation of policy gradient, which calls for on-policy or near-on-policy samples, requiring costly generations. We introduce *Contrastive Policy Gradient*, or CoPG, a simple and mathematically principled new RL algorithm that can estimate the optimal policy even from off-policy data. It can be seen as an off-policy policy gradient approach that does not rely on importance sampling techniques and highlights the importance of using (the right) state baseline. We show this approach to generalize the direct alignment method IPO (identity preference optimization) and classic policy gradient. We experiment with the proposed CoPG on a toy bandit problem to illustrate its properties, as well as for finetuning LLMs on a summarization task, using a learned reward function considered as ground truth for the purpose of the experiments.
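The abstract's emphasis on "(the right) state baseline" reflects a standard fact about policy gradients: subtracting a baseline leaves the gradient's expectation unchanged but can sharply reduce its variance. The bandit, rewards, and estimator below are textbook illustrative choices of mine, not code from the paper.

```python
import math
import random

# Variance-reduction sketch: single-sample REINFORCE gradient estimates
# for one softmax logit, with and without a mean-reward baseline.
# Generic textbook illustration, not the paper's construction.
random.seed(1)
REWARDS = [0.1, 0.9, 0.5]
K = len(REWARDS)
theta = [0.0] * K  # uniform policy

def softmax(t):
    m = max(t)
    e = [math.exp(x - m) for x in t]
    z = sum(e)
    return [x / z for x in e]

pi = softmax(theta)
baseline = sum(pi[a] * REWARDS[a] for a in range(K))  # expected reward

def grad_estimate(a, b):
    # Single-sample estimate of d J / d theta_0: (r - b) * d log pi(a) / d theta_0,
    # where d log pi(a) / d theta_0 = 1[a == 0] - pi[0] for softmax logits.
    return (REWARDS[a] - b) * ((1.0 if a == 0 else 0.0) - pi[0])

def variance(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

draws = [random.choices(range(K), weights=pi)[0] for _ in range(20000)]
var_no_baseline = variance([grad_estimate(a, 0.0) for a in draws])
var_with_baseline = variance([grad_estimate(a, baseline) for a in draws])
print(round(var_no_baseline, 4), round(var_with_baseline, 4))
```

Both estimators target the same gradient, but the baselined one has visibly lower empirical variance, which is why the choice of baseline matters so much for off-policy stability.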
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
Large Language Models
Computational Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive Policy Gradient
Large Language Model Optimization
Reinforcement Learning for Sequential Data
🔎 Similar Papers
No similar papers found.