LSPO: Length-aware Dynamic Sampling for Policy Optimization in LLM Reasoning

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low training efficiency and resource waste caused by excessively long responses in reinforcement learning (RL) for large language models (LLMs) on reasoning tasks, this paper proposes a length-aware dynamic data sampling method. The approach incorporates response length as an explicit feedback signal in a meta-level RL framework, enabling adaptive sample selection during training; the authors present it as the first effort to feed length information directly into the sampling policy, avoiding blind sampling of overly long or invalid reasoning paths. Integrated into the RLVR (Reinforcement Learning with Verifiable Rewards) framework and evaluated across multiple models and datasets, the method significantly improves training stability and convergence speed. Experiments show consistent performance gains on base models (e.g., Llama-2, Qwen) and reasoning benchmarks (e.g., GSM8K, MMLU), and ablation studies confirm that the proposed length-signal integration is both effective and generalizable.

📝 Abstract
Since the release of Deepseek-R1, reinforcement learning with verifiable rewards (RLVR) has become a central approach for training large language models (LLMs) on reasoning tasks. Recent work has largely focused on modifying loss functions to make RLVR more efficient and effective. In this paper, motivated by studies of overthinking in LLMs, we propose Length-aware Sampling for Policy Optimization (LSPO), a novel meta-RLVR algorithm that dynamically selects training data at each step based on the average response length. We evaluate LSPO across multiple base models and datasets, demonstrating that it consistently improves learning effectiveness. In addition, we conduct a detailed ablation study to examine alternative ways of incorporating length signals into dynamic sampling, offering further insights and highlighting promising directions for future research.
Problem

Research questions and friction points this paper is trying to address.

Optimizes policy training using length-aware dynamic sampling
Addresses overthinking in large language model reasoning
Improves reinforcement learning efficiency with verifiable rewards
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic sampling based on response length
Meta-RLVR algorithm for policy optimization
Length-aware training data selection
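The length-aware selection idea above can be sketched as a minimal sampling loop. This is an illustrative sketch, not the paper's exact algorithm: the `max_avg_len` budget, the per-prompt rollout group returned by `generate`, and the hard skip rule are all hypothetical simplifications of LSPO's dynamic sampling.

```python
import random

def average_length(responses):
    """Mean token count across a group of sampled responses
    (whitespace tokenization as a stand-in for a real tokenizer)."""
    return sum(len(r.split()) for r in responses) / len(responses)

def length_aware_sample(prompt_pool, generate, batch_size, max_avg_len=256):
    """Build a training batch, keeping only prompts whose sampled
    response groups stay under a length budget. The hard threshold is
    a hypothetical selection rule standing in for LSPO's length-aware
    sampling policy."""
    batch = []
    for prompt in random.sample(prompt_pool, len(prompt_pool)):
        responses = generate(prompt)  # group of rollouts per prompt
        if average_length(responses) <= max_avg_len:
            batch.append((prompt, responses))
        if len(batch) == batch_size:
            break
    return batch
```

A real implementation would plug this selection step into an RLVR training loop, filtering each step's candidate prompts before computing policy-gradient updates.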