SuperRL: Reinforcement Learning with Supervision to Boost Language Model Reasoning

📅 2025-06-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low sample efficiency of large language models (LLMs) on sparse-reward reasoning tasks, and the difficulty of leveraging high-quality offline reasoning trajectories, this paper proposes a hybrid training framework that integrates offline supervision with online reinforcement learning (RL). The method introduces: (1) an adaptive switching mechanism and a hybrid actor that dynamically balance online policy gradient updates and offline supervised signals; and (2) a unified loss formulation that jointly optimizes policy gradient objectives and supervised learning targets, enabling efficient distillation of offline reasoning trajectories. Evaluated on challenging reasoning benchmarks, including GSM8K, MATH, and HotpotQA, the approach significantly improves sample efficiency (reducing training steps by 40% on average), generalization, and training stability under sparse rewards, consistently outperforming standard PPO and RLHF baselines.

📝 Abstract
Large language models are increasingly used for complex reasoning tasks where high-quality offline data such as expert-annotated solutions and distilled reasoning traces are often available. However, in environments with sparse rewards, reinforcement learning struggles to sample successful trajectories, leading to inefficient learning. At the same time, these offline trajectories that represent correct reasoning paths are not utilized by standard on-policy reinforcement learning methods. To address this limitation, we propose SuperRL, a unified training framework that adaptively incorporates offline supervision into reinforcement learning. SuperRL introduces an Adaptive Switch to detect sparse reward conditions and activates a Hybrid Actor when necessary. The Hybrid Actor integrates policy gradient and supervised learning objectives at the loss level, enabling the model to benefit from accurate offline reasoning signals while maintaining the exploratory capacity of reinforcement learning. Experiments on a range of reasoning benchmarks show that SuperRL consistently outperforms standard reinforcement learning by improving sample efficiency, generalization, and robustness under sparse rewards.
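The abstract describes the Hybrid Actor as combining policy gradient and supervised learning objectives "at the loss level." A minimal sketch of what such a loss-level blend could look like is below; the function name `hybrid_loss` and the mixing weight `alpha` are illustrative assumptions, not identifiers from the paper.

```python
def hybrid_loss(pg_loss: float, sft_loss: float, alpha: float) -> float:
    """Blend an on-policy policy-gradient loss with a supervised loss
    computed on offline reasoning trajectories.

    alpha=1.0 recovers pure RL; alpha=0.0 recovers pure supervised
    fine-tuning. How alpha is set (fixed, scheduled, or driven by the
    Adaptive Switch) is an open detail of this sketch.
    """
    assert 0.0 <= alpha <= 1.0, "mixing weight must lie in [0, 1]"
    return alpha * pg_loss + (1.0 - alpha) * sft_loss
```

In practice both terms would be tensors sharing a computation graph so one backward pass updates the policy from both signals; scalars are used here only to keep the sketch self-contained.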
Problem

Research questions and friction points this paper addresses.

How to enhance reinforcement learning with supervision for language model reasoning
How to utilize offline reasoning trajectories to improve learning in sparse-reward environments
How to combine policy gradient and supervised learning objectives for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified training framework that adaptively incorporates offline supervision into RL
Adaptive Switch that detects sparse-reward conditions
Hybrid Actor that integrates policy gradient and supervised objectives at the loss level
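The Adaptive Switch is described only as detecting sparse-reward conditions and activating the Hybrid Actor when necessary. One plausible realization, sketched under stated assumptions (the class name, the success-rate window, and the threshold are all hypothetical, not the paper's design), is to track recent rollout rewards and fall back to offline supervision when successes are rare:

```python
from collections import deque


class AdaptiveSwitch:
    """Hypothetical sparse-reward detector: monitors the success rate
    over a sliding window of rollout rewards and signals when offline
    supervised learning should be mixed in."""

    def __init__(self, window: int = 100, sparsity_threshold: float = 0.05):
        self.rewards = deque(maxlen=window)  # most recent rollout rewards
        self.sparsity_threshold = sparsity_threshold

    def observe(self, reward: float) -> None:
        """Record the terminal reward of one rollout."""
        self.rewards.append(reward)

    def use_offline_supervision(self) -> bool:
        """True when rewards look sparse and the Hybrid Actor should
        lean on offline trajectories."""
        if not self.rewards:
            return True  # no signal yet: start from supervised data
        success_rate = sum(r > 0 for r in self.rewards) / len(self.rewards)
        return success_rate < self.sparsity_threshold
```

A training loop could call `observe` after each rollout and consult `use_offline_supervision` to decide, per update, whether to add the supervised term to the loss.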