SOUP: Token-level Single-sample Mix-policy Reinforcement Learning for Large Language Models

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited exploration and premature convergence commonly observed in reinforcement learning for large language models, which often stem from insufficient sampling diversity. Existing hybrid-policy approaches suffer from policy mismatch and training instability because they mix data at the trajectory level. To overcome these issues, we propose SOUP, a novel framework that introduces a token-level single-sample mixing paradigm: within each sequence, the prefix is generated by a historical policy while the suffix is produced by the current policy, and on-policy and off-policy information are fused via token-level importance weighting. This enables fine-grained off-policy utilization, enhancing exploration while maintaining training stability. Experiments demonstrate that SOUP significantly outperforms standard on-policy and existing off-policy methods across multiple tasks, achieving superior final performance and exploration efficiency in large language model reinforcement learning.
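The single-sample mixing described in the summary can be sketched as a toy illustration. Everything here is a hypothetical stand-in: the paper's policies are LLMs, while `sample_tokens` below is just a biased sampler, and `mixed_rollout` only shows the prefix/suffix split and per-token off-policy flags, not the authors' implementation.

```python
import random

VOCAB = list("abcd")

def sample_tokens(bias, n, rng):
    """Sample n tokens; `bias` skews probability toward the first vocab entry
    (a crude stand-in for two policies with different distributions)."""
    weights = [bias] + [1.0] * (len(VOCAB) - 1)
    return [rng.choices(VOCAB, weights=weights)[0] for _ in range(n)]

def mixed_rollout(prefix_len, total_len, rng):
    """SOUP-style single-sample mix: the prefix comes from a historical
    (off-policy) sampler, the suffix is continued by the current one."""
    prefix = sample_tokens(bias=3.0, n=prefix_len, rng=rng)              # historical policy
    suffix = sample_tokens(bias=1.0, n=total_len - prefix_len, rng=rng)  # current policy
    # Per-token flags let importance weighting act at the token level.
    off_policy = [True] * prefix_len + [False] * (total_len - prefix_len)
    return prefix + suffix, off_policy

seq, off_policy = mixed_rollout(prefix_len=4, total_len=10, rng=random.Random(0))
print(seq, off_policy)
```

The flags mark exactly which tokens came from the historical policy, so a later loss can apply importance corrections only where the behavior and current policies actually differ.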

📝 Abstract
On-policy reinforcement learning (RL) methods widely used for language model post-training, like Group Relative Policy Optimization (GRPO), often suffer from limited exploration and early saturation due to low sampling diversity. While off-policy data can help, current approaches that mix entire trajectories cause significant policy mismatch and instability. In this work, we propose the Single-sample Mix-pOlicy Unified Paradigm (SOUP), a framework that unifies off- and on-policy learning within individual samples at the token level. It confines off-policy influence to the prefix of a generated sequence sampled from historical policies, while the continuation is generated on-policy. Through token-level importance ratios, SOUP effectively leverages off-policy information while preserving training stability. Extensive experiments demonstrate that SOUP consistently outperforms standard on-policy training and existing off-policy extensions. Further analysis clarifies how fine-grained, single-sample mix-policy training improves both exploration and final performance in LLM RL.
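The token-level importance ratios mentioned in the abstract can be sketched with a standard PPO/GRPO-style clipped surrogate. This is an assumption-laden sketch, not the paper's loss: `soup_token_loss` is a hypothetical name, and the per-token ratio r_t = exp(log pi_current - log pi_behavior) is the generic importance-weighting form; for on-policy suffix tokens the two log-probs coincide, so r_t = 1 and the objective reduces to plain on-policy training.

```python
import math

def soup_token_loss(logp_current, logp_behavior, advantages, clip_eps=0.2):
    """Token-level importance-weighted clipped surrogate (PPO-style sketch).

    logp_current: per-token log-probs under the current policy.
    logp_behavior: per-token log-probs under whichever policy generated
        that token (historical for the prefix, current for the suffix).
    advantages: per-token advantage estimates.
    """
    losses = []
    for lp_cur, lp_beh, adv in zip(logp_current, logp_behavior, advantages):
        r = math.exp(lp_cur - lp_beh)                 # per-token importance ratio
        clipped = max(min(r, 1 + clip_eps), 1 - clip_eps)
        losses.append(-min(r * adv, clipped * adv))   # clipped surrogate term
    return sum(losses) / len(losses)

# On-policy token: behavior == current, ratio is 1, loss is just -advantage.
print(soup_token_loss([0.0], [0.0], [1.0]))
```

Because the ratio is computed per token rather than once per trajectory, off-policy corrections stay confined to the prefix tokens, which is the stability argument the abstract makes against trajectory-level mixing.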
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
large language models
on-policy
off-policy
policy mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

token-level mixing
single-sample policy blending
on-off-policy unification
importance weighting
LLM reinforcement learning