Stratified GRPO: Handling Structural Heterogeneity in Reinforcement Learning of LLM Search Agents

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
During reinforcement learning training of LLM-based search agents, heterogeneous search trajectories—varying in invocation count, position, and outcome—induce cross-stratum bias when a single global baseline is used in standard policy gradient methods, distorting credit assignment and hindering multi-step policy exploration. Method: The paper identifies and formalizes "cross-stratum bias" and proposes Stratified Advantage Normalization (SAN): advantages are computed independently within each trajectory stratum and linearly fused with a global estimator, preserving global statistical properties while eliminating erroneous inter-stratum comparisons. SAN is integrated into the GRPO framework. Results: The approach significantly improves training stability and sample efficiency. On multi-hop and single-hop question-answering benchmarks, it outperforms the original GRPO by up to 11.3 points, achieves higher reward, exhibits more stable convergence, and induces superior search policies.

📝 Abstract
Large language model (LLM) agents increasingly rely on external tools such as search engines to solve complex, multi-step problems, and reinforcement learning (RL) has become a key paradigm for training them. However, the trajectories of search agents are structurally heterogeneous, where variations in the number, placement, and outcomes of search calls lead to fundamentally different answer directions and reward distributions. Standard policy gradient methods, which use a single global baseline, suffer from what we identify and formalize as cross-stratum bias: an "apples-to-oranges" comparison of heterogeneous trajectories. This cross-stratum bias distorts credit assignment and hinders exploration of complex, multi-step search strategies. To address this, we propose Stratified GRPO, whose central component, Stratified Advantage Normalization (SAN), partitions trajectories into homogeneous strata based on their structural properties and computes advantages locally within each stratum. This ensures that trajectories are evaluated only against their true peers. Our analysis proves that SAN eliminates cross-stratum bias, yields conditionally unbiased unit-variance estimates inside each stratum, and retains the global unbiasedness and unit-variance properties enjoyed by standard normalization, resulting in a purer and more scale-stable learning signal. To improve practical stability under finite-sample regimes, we further linearly blend SAN with the global estimator. Extensive experiments on diverse single-hop and multi-hop question-answering benchmarks demonstrate that Stratified GRPO consistently and substantially outperforms GRPO by up to 11.3 points, achieving higher training rewards, greater training stability, and more effective search policies. These results establish stratification as a principled remedy for structural heterogeneity in RL for LLM search agents.
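The blend described in the abstract can be written compactly. The exact notation below is a hedged reconstruction (the paper's symbols are not given here): let $s(i)$ denote the stratum of trajectory $i$, and let $\mu_{s(i)}, \sigma_{s(i)}$ (resp. $\mu, \sigma$) be the within-stratum (resp. global) mean and standard deviation of rewards:

$$
\hat{A}_i \;=\; \lambda\,\frac{R_i - \mu_{s(i)}}{\sigma_{s(i)}} \;+\; (1-\lambda)\,\frac{R_i - \mu}{\sigma},
\qquad \lambda \in [0, 1],
$$

where the first term is the SAN advantage (comparison only against structural peers) and the second is the standard GRPO global normalization retained for finite-sample stability.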
Problem

Research questions and friction points this paper is trying to address.

Addresses structural heterogeneity in LLM search agent trajectories
Eliminates cross-stratum bias in reinforcement learning credit assignment
Improves training stability and search policy effectiveness through stratification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stratified GRPO partitions trajectories into homogeneous strata
SAN computes local advantages within each stratum
Blends local and global estimators for stability
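The mechanism in the bullets above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the paper's implementation: the stratum labels (e.g. number of search calls), the `blend` weight, and the epsilon are assumptions for the sketch.

```python
import numpy as np

def stratified_advantages(rewards, strata, blend=0.75):
    """Sketch of Stratified Advantage Normalization (SAN) blended with
    a global estimator, for one group of GRPO rollouts.

    rewards: 1-D array of trajectory rewards.
    strata:  parallel array of stratum labels, e.g. the number of
             search calls in each trajectory (assumed criterion).
    blend:   mixing weight between local (SAN) and global advantages;
             0.75 is illustrative, not the paper's value.
    """
    rewards = np.asarray(rewards, dtype=float)
    strata = np.asarray(strata)
    eps = 1e-8  # guard against zero variance in small strata

    # Global z-score: standard GRPO group normalization.
    global_adv = (rewards - rewards.mean()) / (rewards.std() + eps)

    # Local z-score within each stratum: each trajectory is compared
    # only against its structural peers, removing cross-stratum bias.
    local_adv = np.empty_like(rewards)
    for s in np.unique(strata):
        mask = strata == s
        r = rewards[mask]
        local_adv[mask] = (r - r.mean()) / (r.std() + eps)

    # Linear fusion of local and global estimators for stability
    # under finite-sample regimes.
    return blend * local_adv + (1.0 - blend) * global_adv
```

With `blend=1.0` the result is pure SAN, and the advantages average to zero inside every stratum, so no stratum is systematically favored over another.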