RL-finetuning LLMs from on- and off-policy data with a single algorithm

📅 2025-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of jointly leveraging on-policy and off-policy data when fine-tuning large language models (LLMs) with reinforcement learning. The authors propose AGRO (Any-Generation Reward Optimization), an algorithm grounded in the principle of generation consistency, which yields a unified policy-gradient optimization objective across heterogeneous data sources. AGRO enables joint training on both on-policy and off-policy data within a single algorithm, comes with theoretical convergence guarantees, and combines generation-consistency modeling with sample-based policy-gradient estimation, sidestepping the high-variance importance-sampling corrections inherent in conventional off-policy methods. Empirical evaluation on mathematical reasoning tasks shows that AGRO outperforms mainstream baselines, including PPO and GRPO, supporting its effectiveness and robustness for mixed-policy co-training.
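The variance issue mentioned above is a standard one: off-policy corrections multiply per-token importance ratios across an entire generation, so small per-token mismatches compound exponentially with sequence length. A minimal numerical sketch of this effect (not from the paper; the sequence length, ratio noise, and sample count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-token importance weights pi_theta(a|s) / pi_behavior(a|s) are
# multiplied across a sequence; here each token's log-ratio is a small
# random perturbation, yet the product over the sequence is heavy-tailed.
T = 50  # hypothetical generation length in tokens
log_ratio_per_token = rng.normal(loc=0.0, scale=0.1, size=(1000, T))
seq_weights = np.exp(log_ratio_per_token.sum(axis=1))  # sequence-level weights

print(f"mean weight: {seq_weights.mean():.2f}")
print(f"max weight:  {seq_weights.max():.2f}")
print(f"weight std:  {seq_weights.std():.2f}")
```

Even with mild per-token noise, a few sequences dominate the estimator, which is the failure mode that motivates avoiding naive importance sampling.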

📝 Abstract
We introduce a novel reinforcement learning algorithm (AGRO, for Any-Generation Reward Optimization) for fine-tuning large language models. AGRO leverages the concept of generation consistency, which states that the optimal policy satisfies the notion of consistency across any possible generation of the model. We derive algorithms that find optimal solutions via the sample-based policy gradient and provide theoretical guarantees on their convergence. Our experiments demonstrate the effectiveness of AGRO in both on-policy and off-policy settings, showing improved performance on mathematical reasoning datasets over baseline algorithms.
Problem

Research questions and friction points this paper is trying to address.

Jointly exploiting on-policy and off-policy data for RL-finetuning LLMs with one algorithm
Characterizing optimal policies via a consistency condition over all model generations
Improving performance on mathematical reasoning benchmarks without high-variance importance sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

AGRO (Any-Generation Reward Optimization) fine-tunes LLMs with a unified policy-gradient objective
Uses generation consistency to optimize across on-policy and off-policy data alike
Sample-based policy-gradient estimation with theoretical convergence guarantees
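AGRO's exact update rule is not reproduced on this page; as a point of reference, the class of sample-based policy-gradient estimators such methods build on (a REINFORCE-style Monte-Carlo estimate with a mean-reward baseline) can be sketched as follows. The function name and array shapes are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def reinforce_gradient(log_prob_grads, rewards, baseline=None):
    """Monte-Carlo policy-gradient estimate: mean over samples of
    (R_i - b) * grad log pi(y_i | x_i).

    log_prob_grads: (N, D) per-sample gradients of the log-likelihood
    rewards:        (N,) scalar rewards for each sampled generation
    baseline:       optional scalar; defaults to the batch mean reward
    """
    r = np.asarray(rewards, dtype=float)
    if baseline is None:
        baseline = r.mean()  # variance-reducing baseline, leaves the estimate unbiased
    advantages = r - baseline
    # Weight each sample's score-function gradient by its advantage.
    return (advantages[:, None] * log_prob_grads).mean(axis=0)

# Toy usage: 4 samples in a 3-dimensional parameter space.
g = reinforce_gradient(np.eye(4, 3), [1.0, 0.0, 1.0, 0.0])
```

The baseline subtraction is one standard variance-reduction device; the summary's claim is that generation consistency lets AGRO avoid the harsher importance-weighting corrections that off-policy variants of this estimator normally require.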
🔎 Similar Papers
2024-06-27 · Conference on Empirical Methods in Natural Language Processing · Citations: 1