🤖 AI Summary
This work addresses the bias in advantage estimation that arises when steps grouped together in stepwise group-based policy optimization have inconsistent historical contexts. To mitigate this, the authors propose a hierarchy-of-groups mechanism: each step in a trajectory is assigned to multiple group levels according to the consistency of its historical context, advantages are computed independently within each group, and the per-group estimates are combined via adaptive weighting to yield more accurate stepwise advantage estimates. The method incurs no additional model or sampling overhead and reduces the bias induced by context inconsistency, improving policy optimization on long-horizon tasks. On the ALFWorld and WebShop benchmarks, agents built on Qwen2.5-series large language models outperform existing reinforcement learning approaches under the same computational budget.
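For context, here is a minimal sketch (not the authors' code; all names are illustrative) of the GRPO-style stepwise group-relative normalization that the paper identifies as problematic. Pooling steps from several trajectories into one group implicitly assumes they share the same historical context; when they do not, the shared mean/std baseline is biased.

```python
# Illustrative sketch of stepwise group-relative advantage normalization
# (GRPO-style). Function and variable names are assumptions, not the
# paper's released implementation.
import numpy as np

def stepwise_group_advantages(step_rewards):
    """Normalize each step's reward against the whole group.

    step_rewards: per-step rewards pooled from several rollout
    trajectories at the same step index. Treating them as one group
    assumes a shared historical context; if contexts differ, the
    shared baseline biases the resulting advantages.
    """
    r = np.asarray(step_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

# Steps from trajectories with *different* histories end up normalized
# against one another -- the context-inconsistency issue.
print(stepwise_group_advantages([1.0, 0.0, 0.5, 0.2]))
```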
📝 Abstract
Group-based reinforcement learning (RL), such as GRPO, has advanced the capabilities of large language models on long-horizon agentic tasks. To enable more fine-grained policy updates, recent research has increasingly shifted toward stepwise group-based policy optimization, which treats each step in a rollout trajectory independently while using a memory module to retain historical context. However, we identify a key issue in estimating stepwise relative advantages, namely context inconsistency, where steps within the same group may differ in their historical contexts. Empirically, we reveal that this issue can lead to severely biased advantage estimation, thereby degrading policy optimization significantly. To address the issue, in this paper, we propose Hierarchy-of-Groups Policy Optimization (HGPO) for long-horizon agentic tasks. Specifically, within a group of rollout trajectories, HGPO assigns each step to multiple hierarchical groups according to the consistency of historical contexts. Then, for each step, HGPO computes distinct advantages within each group and aggregates them with an adaptive weighting scheme. In this way, HGPO can achieve a favorable bias-variance trade-off in stepwise advantage estimation, without extra models or rollouts. Evaluations on two challenging agentic tasks, ALFWorld and WebShop, with Qwen2.5-1.5B-Instruct and Qwen2.5-7B-Instruct show that HGPO significantly outperforms existing agentic RL methods under the same computational constraints. Code is available at https://github.com/langfengQ/verl-agent/tree/master/recipe/hgpo.
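Below is a hedged sketch of the hierarchy-of-groups idea, under assumptions the abstract does not spell out: each step carries a reward and a tuple encoding its historical context, a level-k group contains steps whose first k context elements agree (stricter consistency at deeper levels), and group size stands in for the paper's adaptive weighting scheme, whose exact form is not given here. The name `hgpo_advantages` and this interface are hypothetical.

```python
# Hedged sketch of hierarchical group advantage aggregation.
# Assumptions: contexts are comparable as tuple prefixes; the adaptive
# weights are approximated by group size. Not the released HGPO code.
import numpy as np
from collections import defaultdict

def hgpo_advantages(steps, num_levels, level_weight=lambda k, n: n):
    """steps: list of (context_tuple, reward) pairs at one step index.

    Level 0 pools every step into a single group; level k groups steps
    whose first k context elements agree. Each step's final advantage
    is a weighted average of its per-level group advantages.
    """
    rewards = np.array([r for _, r in steps], dtype=np.float64)
    advantages = np.zeros(len(steps))
    total_w = np.zeros(len(steps))
    for k in range(num_levels + 1):
        groups = defaultdict(list)
        for i, (ctx, _) in enumerate(steps):
            groups[ctx[:k]].append(i)  # context consistency up to depth k
        for idx in groups.values():
            if len(idx) < 2:
                continue  # no relative baseline within a singleton group
            g = rewards[idx]
            adv = (g - g.mean()) / (g.std() + 1e-8)
            w = level_weight(k, len(idx))
            for j, i in enumerate(idx):
                advantages[i] += w * adv[j]
                total_w[i] += w
    return advantages / np.maximum(total_w, 1e-8)

# Toy usage: two contexts, two rollouts each, one hierarchy level.
steps = [(("open fridge",), 1.0), (("open fridge",), 0.0),
         (("go to desk",), 0.7), (("go to desk",), 0.2)]
print(hgpo_advantages(steps, num_levels=1))
```

Blending the loose level-0 estimate (low variance, biased across contexts) with the strict context-matched estimates (unbiased, higher variance) is one way to realize the bias-variance trade-off the abstract describes.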