Group-in-Group Policy Optimization for LLM Agent Training

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the cross-step credit assignment challenge in training long-horizon LLM agents under sparse or delayed rewards, this paper proposes a critic-free, two-level grouped advantage estimation framework that requires no additional rollouts. At the episode level, it computes macro advantages over groups of complete trajectories to capture long-term objectives; at the step level, it constructs micro advantages via anchor-state identification and cross-trajectory regrouping for fine-grained per-step credit attribution. This is the first approach within the group-based RL paradigm to enable scalable training of long-horizon agents while retaining low memory overhead and stable training. Evaluated on ALFWorld and WebShop, it outperforms the GRPO baseline by more than 12% and 9%, respectively, without increasing GPU memory consumption or rollout cost.

📝 Abstract
Recent advances in group-based reinforcement learning (RL) have driven frontier large language models (LLMs) in single-turn tasks like mathematical reasoning. However, their scalability to long-horizon LLM agent training remains limited. Unlike static tasks, agent-environment interactions unfold over many steps and often yield sparse or delayed rewards, making credit assignment across individual steps significantly more challenging. In this work, we propose Group-in-Group Policy Optimization (GiGPO), a novel RL algorithm that achieves fine-grained credit assignment for LLM agents while preserving the appealing properties of group-based RL: critic-free, low memory, and stable convergence. GiGPO introduces a two-level structure for estimating relative advantage: (i) At the episode level, GiGPO computes macro relative advantages based on groups of complete trajectories; (ii) At the step level, GiGPO introduces an anchor state grouping mechanism that retroactively constructs step-level groups by identifying repeated environment states across trajectories. Actions stemming from the same state are grouped together, enabling micro relative advantage estimation. This hierarchical structure effectively captures both global trajectory quality and local step effectiveness without relying on auxiliary models or additional rollouts. We evaluate GiGPO on two challenging agent benchmarks, ALFWorld and WebShop, using Qwen2.5-1.5B-Instruct and Qwen2.5-7B-Instruct. Crucially, GiGPO delivers fine-grained per-step credit signals and achieves performance gains of >12% on ALFWorld and >9% on WebShop over the GRPO baseline, all while maintaining the same GPU memory overhead, identical LLM rollouts, and incurring little to no additional time cost.
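The two-level structure described in the abstract can be sketched in code. This is a minimal illustration, not the authors' implementation: the function names, the trajectory data layout, the GRPO-style mean/std group normalization, and the `omega` weight combining macro and micro advantages are all assumptions made for clarity.

```python
from collections import defaultdict
import statistics

def group_normalize(values):
    # GRPO-style relative advantage within a group:
    # subtract the group mean, divide by the group std.
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - mean) / (std + 1e-8) for v in values]

def gigpo_advantages(trajectories, omega=1.0):
    """Two-level grouped advantage estimation (illustrative sketch).

    trajectories: list of dicts with keys
      'return' - scalar episode return
      'steps'  - list of (state_key, step_reward) tuples, where state_key
                 identifies the environment state the action was taken in
    omega: hypothetical weight mixing step-level into episode-level signal.
    """
    # (i) Episode level: macro advantage from the group of complete trajectories.
    macro = group_normalize([t['return'] for t in trajectories])

    # (ii) Step level: retroactively group actions by repeated (anchor)
    # environment states across trajectories, then normalize within each group.
    anchor_groups = defaultdict(list)  # state_key -> [(traj_idx, step_idx, reward)]
    for i, t in enumerate(trajectories):
        for j, (state_key, reward) in enumerate(t['steps']):
            anchor_groups[state_key].append((i, j, reward))

    micro = [[0.0] * len(t['steps']) for t in trajectories]
    for members in anchor_groups.values():
        if len(members) < 2:
            continue  # a singleton state yields no relative signal
        normed = group_normalize([r for (_, _, r) in members])
        for (i, j, _), a in zip(members, normed):
            micro[i][j] = a

    # Combine global trajectory quality with local step effectiveness.
    return [[macro[i] + omega * micro[i][j] for j in range(len(t['steps']))]
            for i, t in enumerate(trajectories)]
```

Because both levels reuse the same sampled trajectories, the sketch matches the abstract's claim of no auxiliary critic and no extra rollouts: the step-level groups are built retroactively from states that recur across the existing group.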
Problem

Research questions and friction points this paper is trying to address.

Scalability of group-based RL for long-horizon LLM agent training
Fine-grained credit assignment in multi-step agent-environment interactions
Balancing performance gains with low memory and computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-level relative advantage estimation structure
Anchor state grouping for step-level credit
Critic-free, low memory, stable convergence
Lang Feng
Nanyang Technological University
Reinforcement Learning
Zhenghai Xue
Ph.D. student at Nanyang Technological University, Singapore
Artificial Intelligence, Reinforcement Learning
Tingcong Liu
Nanyang Technological University, Singapore
Bo An
Nanyang Technological University, Singapore; Skywork AI, Singapore