JoyAgents-R1: Joint Evolution Dynamics for Versatile Multi-LLM Agents with Reinforcement Learning

📅 2025-06-24
🤖 AI Summary
In heterogeneous multi-agent reinforcement learning, joint evolutionary training suffers from low cooperation efficiency and training instability. To address these challenges, this paper proposes JoyAgents-R1, a framework for jointly training multiple large language model (LLM) agents. Its core contributions are: (i) the first application of Group Relative Policy Optimization (GRPO) to the joint training of heterogeneous multi-LLM agents; (ii) node-wise Monte Carlo sampling over each agent's reasoning trajectories for decentralized, diverse action exploration; (iii) a marginal-benefit-driven selection strategy that updates only the top-K sampling groups with the largest reward fluctuations, improving training stability through cost-effective parameter adjustments; and (iv) an adaptive memory evolution mechanism that reuses GRPO rewards as cost-free supervisory signals to jointly optimize policy updates and long-term memory retention. Empirical results demonstrate that JoyAgents-R1, built on small open-source LLMs, achieves collaboration performance comparable to that of larger proprietary models on both general and domain-specific tasks, while improving convergence speed and training stability.

📝 Abstract
Multi-agent reinforcement learning (MARL) has emerged as a prominent paradigm for increasingly complex tasks. However, joint evolution across heterogeneous agents remains challenging due to cooperative inefficiency and training instability. In this paper, we propose the joint evolution dynamics for MARL called JoyAgents-R1, which first applies Group Relative Policy Optimization (GRPO) to the joint training of heterogeneous multi-agents. By iteratively refining agents' large language models (LLMs) and memories, the method achieves holistic equilibrium with optimal decision-making and memory capabilities. Specifically, JoyAgents-R1 first implements node-wise Monte Carlo sampling on the behavior of each agent across entire reasoning trajectories to enhance GRPO sampling efficiency while maintaining policy diversity. Then, our marginal benefit-driven selection strategy identifies top-$K$ sampling groups with maximal reward fluctuations, enabling targeted agent model updates that improve training stability and maximize joint benefits through cost-effective parameter adjustments. Meanwhile, JoyAgents-R1 introduces an adaptive memory evolution mechanism that repurposes GRPO rewards as cost-free supervisory signals to eliminate repetitive reasoning and accelerate convergence. Experiments across general and domain-specific scenarios demonstrate that JoyAgents-R1 achieves performance comparable to that of larger LLMs while built on smaller open-source models.
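The two core mechanics described in the abstract, group-relative advantage normalization (GRPO) and marginal-benefit-driven selection of the top-K sampling groups by reward fluctuation, can be sketched in a few lines. This is a minimal illustration under assumed data shapes, not the paper's implementation; the function names and the `{"agent", "rewards"}` group structure are hypothetical.

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: center each sampled
    trajectory's reward on the group mean and scale by the group
    standard deviation, so no separate value critic is needed."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # uniform group -> all-zero advantages
    return [(r - mean) / std for r in rewards]

def select_top_k_groups(groups, k):
    """Marginal-benefit-driven selection (sketch): rank sampling groups
    by reward fluctuation and update only the top-k agents' models."""
    return sorted(groups,
                  key=lambda g: statistics.pstdev(g["rewards"]),
                  reverse=True)[:k]

# Illustrative usage: a stable agent is skipped, a volatile one is updated.
groups = [
    {"agent": "router",  "rewards": [0.9, 0.9, 0.9]},
    {"agent": "planner", "rewards": [0.1, 0.9, 0.5]},
]
selected = select_top_k_groups(groups, k=1)
```

Ranking by reward standard deviation captures the intuition that groups whose rewards still fluctuate have the most to gain from a parameter update, while stable groups can be left untouched.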
Problem

Research questions and friction points this paper is trying to address.

Enhance cooperative efficiency in multi-agent reinforcement learning
Improve training stability for heterogeneous agents
Optimize decision-making and memory capabilities in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group Relative Policy Optimization for heterogeneous agents
Node-wise Monte Carlo sampling for reasoning trajectories
Adaptive memory evolution with GRPO rewards
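The third bullet, adaptive memory evolution driven by GRPO rewards, can be pictured as reward-gated retention: a trace is kept only when its reward beats what is already stored, so repetitive low-value reasoning is discarded for free. The class below is a hypothetical sketch; its name, API, threshold, and eviction policy are illustrative assumptions, not the paper's design.

```python
class RewardGatedMemory:
    """Sketch of reward-gated memory evolution: GRPO rewards act as
    cost-free supervisory signals deciding which reasoning traces
    to retain (all names and policies here are illustrative)."""

    def __init__(self, threshold=0.0, capacity=100):
        self.threshold = threshold
        self.capacity = capacity
        self.store = {}  # query -> (trace, reward)

    def update(self, query, trace, reward):
        # Retain a trace only if it clears the threshold and beats
        # the reward of the trace currently stored for this query.
        prev = self.store.get(query)
        if reward >= self.threshold and (prev is None or reward > prev[1]):
            self.store[query] = (trace, reward)
        if len(self.store) > self.capacity:
            # Evict the lowest-reward entry to stay within capacity.
            worst = min(self.store, key=lambda q: self.store[q][1])
            del self.store[worst]

    def recall(self, query):
        entry = self.store.get(query)
        return entry[0] if entry else None
```

Because the gating signal is the reward GRPO already computes, no extra labeling or evaluation pass is needed to curate the memory.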
Authors

Ai Han (JD.com, Beijing, China)
Junxing Hu (University of Chinese Academy of Sciences; AI Agent, Computer Vision, 3D Vision, Biometrics)
Pu Wei (JD.com, Beijing, China)
Zhiqian Zhang (JD.com, Beijing, China)
Yuhang Guo (JD.com, Beijing, China)
Jiawei Lu (JD.com, Beijing, China)
Zicheng Zhang (JD.com, Beijing, China)