🤖 AI Summary
In heterogeneous multi-agent reinforcement learning, joint evolutionary training suffers from low cooperation efficiency and training instability. To address these challenges, this paper proposes JoyAgents-R1, a framework for the collaborative training of multiple large language model (LLM) agents. Its core contributions are: (i) the first application of Group Relative Policy Optimization (GRPO) to joint multi-LLM-agent training; (ii) node-wise Monte Carlo sampling over each agent's reasoning trajectory for efficient yet diverse policy exploration; (iii) a marginal-benefit-driven selection strategy that updates only the top-$K$ sampling groups with the largest reward fluctuations, improving training stability at low parameter-update cost; and (iv) an adaptive memory evolution mechanism that reuses GRPO rewards as cost-free supervisory signals to jointly optimize policy updates and long-term memory retention. Empirical results demonstrate that JoyAgents-R1, built only on small open-source LLMs, achieves collaboration performance comparable to that of larger proprietary models across both general and domain-specific tasks, while significantly improving convergence speed and training stability.
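The group-relative credit assignment behind contribution (i) can be sketched as follows. This is a minimal illustration of the standard GRPO scoring rule, not the paper's implementation; the helper name `grpo_advantages` is hypothetical. Each sampled trajectory in a group is scored against the group's own reward statistics, so no learned critic is needed:

```python
import statistics

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Score each sampled trajectory relative to its own group:
    advantage = (reward - group mean) / group std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero-variance groups
    return [(r - mean) / std for r in rewards]

# Rewards from one Monte Carlo sampling group at a single agent node.
print(grpo_advantages([0.2, 0.5, 0.8]))
```

Trajectories rewarded above the group mean receive positive advantages and are reinforced; those below are suppressed, which is what lets GRPO drop the value network that PPO-style methods require.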
📝 Abstract
Multi-agent reinforcement learning (MARL) has emerged as a prominent paradigm for increasingly complex tasks. However, joint evolution across heterogeneous agents remains challenging due to cooperative inefficiency and training instability. In this paper, we propose joint evolution dynamics for MARL, called JoyAgents-R1, which first applies Group Relative Policy Optimization (GRPO) to the joint training of heterogeneous multi-agent systems. By iteratively refining agents' large language models (LLMs) and memories, the method achieves holistic equilibrium with optimal decision-making and memory capabilities. Specifically, JoyAgents-R1 first implements node-wise Monte Carlo sampling on the behavior of each agent across entire reasoning trajectories to enhance GRPO sampling efficiency while maintaining policy diversity. Then, our marginal benefit-driven selection strategy identifies the top-$K$ sampling groups with maximal reward fluctuations, enabling targeted agent model updates that improve training stability and maximize joint benefits through cost-effective parameter adjustments. Meanwhile, JoyAgents-R1 introduces an adaptive memory evolution mechanism that repurposes GRPO rewards as cost-free supervisory signals to eliminate repetitive reasoning and accelerate convergence. Experiments across general and domain-specific scenarios demonstrate that JoyAgents-R1 achieves performance comparable to that of larger LLMs while being built on smaller open-source models.
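The marginal-benefit-driven top-$K$ selection could look roughly like the sketch below, assuming one list of sampled group rewards per agent. The function name `select_topk_agents` and the use of reward standard deviation as the "fluctuation" score are illustrative assumptions, not the paper's exact formulation:

```python
import statistics

def select_topk_agents(agent_group_rewards: dict[str, list[float]], k: int) -> list[str]:
    """Rank agents by how much their sampled group rewards fluctuate
    (std as a proxy for marginal benefit of an update), keep the top-K."""
    volatility = {a: statistics.pstdev(rs) for a, rs in agent_group_rewards.items()}
    return sorted(volatility, key=volatility.get, reverse=True)[:k]

# Hypothetical per-agent rewards from one round of node-wise sampling.
rewards = {
    "planner":   [0.1, 0.9, 0.5],  # high fluctuation -> likely selected
    "retriever": [0.6, 0.6, 0.6],  # flat rewards -> low marginal benefit
    "executor":  [0.4, 0.7, 0.4],
}
print(select_topk_agents(rewards, k=2))
```

Agents with flat reward profiles are left untouched in that round, which is the "cost-effective parameter adjustments" idea: gradient updates are spent only where sampled rewards suggest the policy is still unsettled.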