🤖 AI Summary
During reinforcement learning training of LLM-based search agents, heterogeneous search trajectories—varying in invocation count, position, and outcome—induce cross-stratum bias when a single global baseline is used in standard policy gradient methods, distorting credit assignment and hindering multi-step policy exploration.
Method: We introduce the novel concept of “cross-stratum bias” and propose Stratified Advantage Normalization (SAN): advantages are computed independently within each trajectory stratum and linearly fused with a global estimator, preserving global statistical properties while eliminating erroneous inter-stratum comparisons. SAN is integrated into the GRPO framework.
Results: Our approach significantly improves training stability and sample efficiency. On multi-hop and single-hop question-answering benchmarks, it outperforms the original GRPO by up to 11.3 points, achieves higher reward, exhibits more stable convergence, and induces superior search policies.
📝 Abstract
Large language model (LLM) agents increasingly rely on external tools such as search engines to solve complex, multi-step problems, and reinforcement learning (RL) has become a key paradigm for training them. However, the trajectories of search agents are structurally heterogeneous, where variations in the number, placement, and outcomes of search calls lead to fundamentally different answer directions and reward distributions. Standard policy gradient methods, which use a single global baseline, suffer from what we identify and formalize as cross-stratum bias: an "apples-to-oranges" comparison of heterogeneous trajectories. This cross-stratum bias distorts credit assignment and hinders exploration of complex, multi-step search strategies. To address this, we propose Stratified GRPO, whose central component, Stratified Advantage Normalization (SAN), partitions trajectories into homogeneous strata based on their structural properties and computes advantages locally within each stratum. This ensures that trajectories are evaluated only against their true peers. Our analysis proves that SAN eliminates cross-stratum bias, yields conditionally unbiased unit-variance estimates inside each stratum, and retains the global unbiasedness and unit-variance properties enjoyed by standard normalization, resulting in a purer and more scale-stable learning signal. To improve practical stability under finite-sample regimes, we further linearly blend SAN with the global estimator. Extensive experiments on diverse single-hop and multi-hop question-answering benchmarks demonstrate that Stratified GRPO consistently and substantially outperforms GRPO by up to 11.3 points, achieving higher training rewards, greater training stability, and more effective search policies. These results establish stratification as a principled remedy for structural heterogeneity in RL for LLM search agents.
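The core computation described above — normalizing advantages within each stratum and then linearly blending with a global estimator — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the stratum key (e.g., the number of search calls), the blend weight `alpha`, and the function name are all assumptions for the sake of the example.

```python
import numpy as np
from collections import defaultdict

def stratified_advantages(rewards, strata, alpha=0.7, eps=1e-8):
    """Sketch of Stratified Advantage Normalization (SAN) blended with a
    global estimator.

    rewards: scalar reward per trajectory in the rollout group
    strata:  hashable stratum key per trajectory (e.g., number of
             search calls; the paper's exact keying is an assumption here)
    alpha:   blend weight between local and global estimators
             (hypothetical name and value)
    """
    rewards = np.asarray(rewards, dtype=np.float64)

    # Global baseline: standard GRPO-style normalization over the whole group.
    a_global = (rewards - rewards.mean()) / (rewards.std() + eps)

    # Local baselines: normalize each trajectory only against its own
    # stratum, so heterogeneous trajectories are never compared
    # "apples to oranges".
    a_local = np.empty_like(rewards)
    groups = defaultdict(list)
    for i, key in enumerate(strata):
        groups[key].append(i)
    for idx in groups.values():
        r = rewards[idx]
        a_local[idx] = (r - r.mean()) / (r.std() + eps)

    # Linear fusion of the stratified and global estimators.
    return alpha * a_local + (1 - alpha) * a_global
```

With `alpha=1.0` this reduces to pure within-stratum normalization (each trajectory compared only to its true peers); with `alpha=0.0` it recovers the standard global baseline, matching the blending described in the abstract.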