Tapered Off-Policy REINFORCE: Stable and efficient reinforcement learning for LLMs

📅 2025-03-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address slow training, instability, reliance on KL regularization, and inefficient use of negative samples in reinforcement-learning fine-tuning of large language models (LLMs), this paper proposes Tapered Off-Policy REINFORCE (TOPR). TOPR employs asymmetric, tapered importance sampling to handle positive and negative samples in a unified, fully offline framework while eliminating KL regularization entirely. Its key contributions are: (i) the tapered importance sampling mechanism itself; (ii) a theoretical and empirical account of how the REINFORCE baseline implicitly regularizes dataset composition in the presence of negative samples; and (iii) an effective use of negative samples in offline RL, yielding substantial accuracy gains. On the GSM8K and MATH benchmarks, an 8B model trained with TOPR matches the performance of a KL-regularized 70B model, improves test-time accuracy and data efficiency, avoids the "wasted inference" of discarding negative samples, and supports multiple rounds of iterative training with consistent gains.

๐Ÿ“ Abstract
We propose a new algorithm for fine-tuning large language models using reinforcement learning. Tapered Off-Policy REINFORCE (TOPR) uses an asymmetric, tapered variant of importance sampling to speed up learning while maintaining stable learning dynamics, even without the use of KL regularization. TOPR can be applied in a fully offline fashion, allows the handling of positive and negative examples in a unified framework, and benefits from the implementational simplicity that is typical of Monte Carlo algorithms. We demonstrate the effectiveness of our approach with a series of experiments on the GSM8K and MATH reasoning benchmarks, finding performance gains when training a model both for solution generation and as a generative verifier. We show that properly leveraging positive and negative examples alike in the off-policy regime simultaneously increases test-time accuracy and training data efficiency, all the while avoiding the "wasted inference" that comes with discarding negative examples. We find that this advantage persists over multiple iterations of training and can be amplified by dataset curation techniques, enabling us to match 70B-parameter model performance with 8B language models. As a corollary to this work, we find that REINFORCE's baseline parameter plays an important and unexpected role in defining dataset composition in the presence of negative examples, and is consequently critical in driving off-policy performance.
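To make the abstract's central idea concrete, the sketch below shows one plausible form of an asymmetric, tapered importance-sampling weight for off-policy REINFORCE. The cap value, the positive/negative asymmetry, and the function names are illustrative assumptions, not the paper's exact objective.

```python
import math

def tapered_weight(logp_current: float, logp_behavior: float,
                   reward: float, cap: float = 1.0) -> float:
    """One plausible asymmetric taper (illustrative, not the paper's exact
    rule): positive-reward samples have their importance ratio pi/mu capped
    at `cap` so a single sample cannot dominate the update, while
    negative-reward samples keep the raw ratio, so completions the current
    policy already finds unlikely receive only a vanishing penalty."""
    ratio = math.exp(logp_current - logp_behavior)  # pi(y|x) / mu(y|x)
    if reward > 0:
        return min(ratio, cap)  # taper positives: bounded update magnitude
    return ratio                # negatives: small ratio => small penalty

def grad_scale(logp_current: float, logp_behavior: float,
               reward: float, baseline: float = 0.0) -> float:
    """Per-sample REINFORCE-style gradient scale: weight * (reward - baseline)."""
    return tapered_weight(logp_current, logp_behavior, reward) * (reward - baseline)
```

Consistent with the abstract's remark about the baseline, shifting `baseline` changes which samples contribute positive versus negative gradient signal, effectively re-weighting the dataset's composition.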
Problem

Research questions and friction points this paper is trying to address.

Stable reinforcement learning for large language models
Efficient handling of positive and negative examples
Improved training data efficiency and test-time accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tapered Off-Policy REINFORCE for stable RL
Unified framework for positive-negative examples
Offline training with efficient data utilization
👥 Authors
Nicolas Le Roux (McGill, UdeM): Machine Learning, Neural networks, Deep learning, Optimization
Marc G. Bellemare (Reliant AI): Reinforcement Learning
Jonathan Lebensold (Reliant AI)
Arnaud Bergeron (Mila)
Joshua Greaves (Reliant AI)
Alex Fréchette (Reliant AI)
Carolyne Pelletier (Reliant AI)
Eric Thibodeau-Laufer (Reliant AI)
Sándor Toth (Reliant AI)
Sam Work (Reliant AI)