Unbiased Dynamic Pruning for Efficient Group-Based Policy Optimization

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of Group Relative Policy Optimization (GRPO), which stems from extensive grouped sampling, and the estimation bias introduced by existing selective data utilization methods, which compromises theoretical guarantees and convergence. To accelerate training while preserving unbiased gradient estimation, the authors propose a dynamic pruning strategy grounded in importance sampling, with a theoretically derived rescaling factor that keeps the original optimization objective intact; the authors present this as the first unbiased gradient estimator under dynamic pruning. They further introduce a window-greedy Dense Prompt Packing mechanism to mitigate the data sparsity induced by pruning and improve hardware utilization. On the Qwen3-4B model, the method achieves a 2.37× training speedup and outperforms GRPO by an average of 3.36% accuracy across six mathematical reasoning benchmarks.
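The summary describes an importance-sampling correction: pruned samples are dropped at random, and survivors are rescaled so the expected gradient still matches the full-batch objective. A minimal sketch of that general idea follows (a Horvitz-Thompson-style estimator; the `keep_prob` function and the uniform `1/keep_prob` rescaling are illustrative assumptions, not the paper's derived factor):

```python
import random

def pruned_gradient_estimate(samples, grad_fn, keep_prob):
    """Keep each sample with probability keep_prob(x); rescale each
    survivor by 1/keep_prob(x) so the estimator's expectation equals
    the full-batch average gradient (unbiasedness under pruning)."""
    total = 0.0
    n = len(samples)
    for x in samples:
        if random.random() < keep_prob(x):
            # the rescaling factor is what keeps the estimate unbiased
            total += grad_fn(x) / keep_prob(x)
    return total / n
```

Averaged over many pruning draws, this estimate converges to the full-batch gradient, which is the property the paper's rescaling factor is designed to preserve.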

📝 Abstract
Group Relative Policy Optimization (GRPO) effectively scales LLM reasoning but incurs prohibitive computational costs due to its extensive group-based sampling requirement. While recent selective data utilization methods can mitigate this overhead, they could induce estimation bias by altering the underlying sampling distribution, compromising theoretical rigor and convergence behavior. To address this limitation, we propose Dynamic Pruning Policy Optimization (DPPO), a framework that enables dynamic pruning while preserving unbiased gradient estimation through importance sampling-based correction. By incorporating mathematically derived rescaling factors, DPPO significantly accelerates GRPO training without altering the optimization objective of the full-batch baseline. Furthermore, to mitigate the data sparsity induced by pruning, we introduce Dense Prompt Packing, a window-based greedy strategy that maximizes valid token density and hardware utilization. Extensive experiments demonstrate that DPPO consistently accelerates training across diverse models and benchmarks. For instance, on Qwen3-4B trained on MATH, DPPO achieves 2.37× training speedup and outperforms GRPO by 3.36% in average accuracy across six mathematical reasoning benchmarks.
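The Dense Prompt Packing idea, packing variable-length prompts densely into fixed-capacity token windows so fewer padding tokens are wasted, can be sketched with a simple greedy heuristic. The look-ahead `window` size and the largest-fit selection rule below are assumptions for illustration; the paper's exact window-greedy strategy is not specified here:

```python
def dense_prompt_packing(lengths, capacity, window=8):
    """Greedily fill one pack at a time: among the next `window`
    pending prompt lengths, repeatedly take the largest one that
    still fits, maximizing valid-token density per pack."""
    pending = list(lengths)
    packs = []
    while pending:
        pack, free = [], capacity
        while True:
            # candidate indices within the look-ahead window that fit
            candidates = [i for i in range(min(window, len(pending)))
                          if pending[i] <= free]
            if not candidates:
                break
            best = max(candidates, key=lambda i: pending[i])
            free -= pending[best]
            pack.append(pending.pop(best))
        if not pack:
            # oversized prompt: place it alone to avoid stalling
            pack.append(pending.pop(0))
        packs.append(pack)
    return packs
```

Denser packs mean more valid tokens per forward pass, which is the hardware-utilization benefit the abstract attributes to Dense Prompt Packing.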
Problem

Research questions and friction points this paper is trying to address.

Group Relative Policy Optimization
computational cost
estimation bias
sampling distribution
policy optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Pruning
Unbiased Gradient Estimation
Importance Sampling
Dense Prompt Packing
Group-Based Policy Optimization