LLM Collaboration With Multi-Agent Reinforcement Learning

📅 2025-08-06
🤖 AI Summary
Existing large language models (LLMs) are not optimized for multi-agent collaboration, and mainstream fine-tuning approaches rely on labor-intensive, hand-crafted individual reward functions. Method: the authors formulate LLM collaboration as a cooperative multi-agent reinforcement learning (MARL) task and propose Multi-Agent Group Relative Policy Optimization (MAGRPO). MAGRPO uses a group-relative policy optimization mechanism that removes the need for per-agent individual reward functions, enabling distributed policy updates over jointly generated responses. It integrates RL approaches for LLMs with MARL techniques in a multi-turn interactive training framework. Results: empirical evaluation on collaborative writing and programming tasks shows that MAGRPO improves both the efficiency and output quality of multi-LLM cooperation, bypassing manual per-agent reward engineering while supporting effective coordination among LLM agents.
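The group-relative mechanism described above can be sketched as follows. This is a minimal illustration assuming GRPO-style normalization applied to a single shared (joint) reward per sampled group of agent responses; the function name and reward values are hypothetical, not taken from the paper:

```python
import statistics

def magrpo_advantages(joint_rewards):
    """Group-relative advantages: each joint rollout's shared reward is
    normalized against the group mean and standard deviation, so no
    per-agent reward function is required."""
    mean = statistics.mean(joint_rewards)
    std = statistics.pstdev(joint_rewards) or 1.0  # avoid division by zero
    return [(r - mean) / std for r in joint_rewards]

# One joint reward per sampled group of cooperating agent responses;
# every agent in a rollout receives the same advantage for its update.
rewards = [0.2, 0.5, 0.9, 0.4]
advs = magrpo_advantages(rewards)
```

Because the advantage is computed relative to the group rather than from hand-crafted individual signals, the same joint scorer (e.g. a task-level quality check) can drive all agents' policy updates.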

📝 Abstract
A large amount of work has been done in Multi-Agent Systems (MAS) for modeling and solving problems with multiple interacting agents. However, most LLMs are pretrained independently and not specifically optimized for coordination. Existing LLM fine-tuning frameworks rely on individual rewards, which require complex reward designs for each agent to encourage collaboration. To address these challenges, we model LLM collaboration as a cooperative Multi-Agent Reinforcement Learning (MARL) problem. We develop a multi-agent, multi-turn algorithm, Multi-Agent Group Relative Policy Optimization (MAGRPO), to solve it, building on current RL approaches for LLMs as well as MARL techniques. Our experiments on LLM writing and coding collaboration demonstrate that fine-tuning MAS with MAGRPO enables agents to generate high-quality responses efficiently through effective cooperation. Our approach opens the door to using other MARL methods for LLMs and highlights the associated challenges.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLMs for multi-agent coordination
Avoiding complex per-agent reward design in collaborative MAS
Improving LLM cooperation via MARL-based fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model LLM collaboration as MARL problem
Develop MAGRPO for multi-agent coordination
Fine-tune MAS with MAGRPO for cooperation
Shuo Liu
Khoury College of Computer Sciences, Northeastern University, Boston, MA, 02115, USA
Zeyu Liang
Khoury College of Computer Sciences, Northeastern University, Boston, MA, 02115, USA
Xueguang Lyu
Northeastern University
Reinforcement Learning
Christopher Amato
Associate Professor at Northeastern University
Artificial Intelligence, Multi-Agent Systems, Multi-Robot Systems, Reinforcement Learning