🤖 AI Summary
This work addresses the problem of ineffective learning signals in LLM-based multi-agent systems, where heterogeneous task difficulty produces high gradient variance and makes credit assignment hard. To mitigate these issues, the authors propose a Group Relative Policy Optimization mechanism that samples multiple communication graphs per task and evaluates edge importance by relative performance within the group. This yields a group-relative advantage function that enables fine-grained credit assignment, suppresses reward noise, and stabilizes training, while also surfacing critical communication pathways that noise would otherwise obscure. Experimental results demonstrate that the proposed framework significantly outperforms existing methods on both reasoning and code generation benchmarks.
📝 Abstract
Optimizing communication topology is fundamental to the efficiency and effectiveness of Large Language Model (LLM)-based Multi-Agent Systems (MAS). While recent approaches utilize reinforcement learning to dynamically construct task-specific graphs, they typically rely on single-sample policy gradients with absolute rewards (e.g., binary correctness). This paradigm suffers from severe gradient variance and the credit assignment problem: simple queries yield non-informative positive rewards for suboptimal structures, while difficult queries often result in failures that provide no learning signal. To address these challenges, we propose Graph-GRPO, a novel topology optimization framework that integrates Group Relative Policy Optimization. Instead of evaluating a single topology in isolation, Graph-GRPO samples a group of diverse communication graphs for each query and computes the advantage of specific edges based on their relative performance within the group. By normalizing rewards across the sampled group, our method effectively mitigates the noise derived from task difficulty variance and enables fine-grained credit assignment. Extensive experiments on reasoning and code generation benchmarks demonstrate that Graph-GRPO significantly outperforms state-of-the-art baselines, achieving superior training stability and identifying critical communication pathways previously obscured by reward noise.
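The core mechanism, sampling a group of communication graphs per query, normalizing their rewards into group-relative advantages, and attributing credit to the edges each graph contains, can be sketched as follows. This is a minimal illustration only: it assumes standard GRPO-style mean/std normalization and simple presence-based edge credit; the agent names and the uniform edge attribution are hypothetical, not the paper's exact formulation.

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize a group's rewards into relative advantages (GRPO-style):
    A_i = (r_i - mean(r)) / (std(r) + eps)."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

def edge_credit(graphs, rewards):
    """Accumulate each sampled graph's advantage onto the edges it contains,
    giving a per-edge credit signal averaged over the group.
    (Presence-based attribution is an illustrative assumption.)"""
    advantages = group_relative_advantages(rewards)
    credit = {}
    for graph, adv in zip(graphs, advantages):
        for edge in graph:
            credit[edge] = credit.get(edge, 0.0) + adv / len(graphs)
    return credit

# Toy group: four sampled graphs for one query (hypothetical agent names).
graphs = [
    {("planner", "coder")},                       # succeeds
    {("planner", "coder"), ("coder", "critic")},  # succeeds
    {("planner", "critic")},                      # fails
    set(),                                        # fails (no communication)
]
rewards = [1.0, 1.0, 0.0, 0.0]  # binary correctness per sampled graph

credit = edge_credit(graphs, rewards)
# The ("planner", "coder") edge, present only in successful graphs,
# receives positive credit; ("planner", "critic") receives negative credit.
```

Even with uniformly binary rewards, normalizing within the group converts absolute outcomes into a contrastive signal, so an edge shared by successful graphs stands out against one appearing only in failures.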