🤖 AI Summary
To address the cross-step credit assignment challenge in training long-horizon LLM agents under sparse or delayed rewards, this paper proposes a critic-free, two-level grouped advantage estimation framework that requires no additional rollouts. At the episode level, it introduces macro-advantage modeling to capture long-term objectives; at the step level, it constructs micro-advantages via anchor-state identification and cross-trajectory regrouping for fine-grained per-step reward attribution. Within the group-based RL paradigm, this is the first approach to enable scalable training of long-horizon agents with both low memory overhead and improved training stability. Evaluated on ALFWorld and WebShop, it outperforms the GRPO baseline by 12.3% and 9.1%, respectively, without increasing GPU memory consumption or rollout cost.
📝 Abstract
Recent advances in group-based reinforcement learning (RL) have driven frontier large language models (LLMs) in single-turn tasks like mathematical reasoning. However, their scalability to long-horizon LLM agent training remains limited. Unlike static tasks, agent-environment interactions unfold over many steps and often yield sparse or delayed rewards, making credit assignment across individual steps significantly more challenging. In this work, we propose Group-in-Group Policy Optimization (GiGPO), a novel RL algorithm that achieves fine-grained credit assignment for LLM agents while preserving the appealing properties of group-based RL: critic-free, low memory, and stable convergence. GiGPO introduces a two-level structure for estimating relative advantage: (i) At the episode level, GiGPO computes macro relative advantages based on groups of complete trajectories; (ii) At the step level, GiGPO introduces an anchor state grouping mechanism that retroactively constructs step-level groups by identifying repeated environment states across trajectories. Actions stemming from the same state are grouped together, enabling micro relative advantage estimation. This hierarchical structure effectively captures both global trajectory quality and local step effectiveness without relying on auxiliary models or additional rollouts. We evaluate GiGPO on two challenging agent benchmarks, ALFWorld and WebShop, using Qwen2.5-1.5B-Instruct and Qwen2.5-7B-Instruct. Crucially, GiGPO delivers fine-grained per-step credit signals and achieves performance gains of >12% on ALFWorld and >9% on WebShop over the GRPO baseline: all while maintaining the same GPU memory overhead, identical LLM rollouts, and incurring little to no additional time cost.
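The two-level structure described above can be sketched in code: a GRPO-style macro advantage is computed over a group of complete trajectories, and micro advantages are computed by retroactively bucketing steps that share the same anchor environment state across trajectories. This is a minimal illustrative sketch, not the paper's exact formulation; the data layout, the `step_return` field (standing in for a per-step return signal), the normalization details, and the combination weight `omega` are all assumptions made for illustration.

```python
import numpy as np
from collections import defaultdict

def gigpo_advantages(trajectories, omega=1.0):
    """Sketch of two-level grouped advantage estimation.

    `trajectories`: list of dicts, each with a scalar episode "return" and a
    list of "steps", where each step records its environment "state" and an
    illustrative per-step return "step_return". All names are assumptions.
    """
    # --- Episode level: macro relative advantage over the trajectory group ---
    returns = np.array([traj["return"] for traj in trajectories])
    macro = (returns - returns.mean()) / (returns.std() + 1e-8)

    # --- Step level: anchor-state grouping across trajectories ---
    # Retroactively bucket every step by its environment state; steps that
    # share an anchor state form a group for micro advantage estimation.
    groups = defaultdict(list)
    for i, traj in enumerate(trajectories):
        for t, step in enumerate(traj["steps"]):
            groups[step["state"]].append((i, t, step["step_return"]))

    micro = [np.zeros(len(traj["steps"])) for traj in trajectories]
    for state, members in groups.items():
        rs = np.array([r for _, _, r in members])
        norm = (rs - rs.mean()) / (rs.std() + 1e-8)
        for (i, t, _), a in zip(members, norm):
            micro[i][t] = a

    # Combine: per-step advantage = macro (broadcast) + omega * micro
    return [macro[i] + omega * micro[i] for i in range(len(trajectories))]
```

Because both levels reuse the same group of rollouts, no critic network and no extra environment interaction are needed, which is what keeps memory and rollout cost at the GRPO level.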