🤖 AI Summary
Problem: Large language models (LLMs) are pretrained independently and are not optimized for multi-agent collaboration, while mainstream fine-tuning approaches rely on labor-intensive, hand-crafted per-agent reward functions.
Method: We formulate LLM collaboration as a cooperative multi-agent reinforcement learning (MARL) task and propose Multi-Agent Group Relative Policy Optimization (MAGRPO). MAGRPO introduces a group-relative policy optimization mechanism that eliminates explicit dependence on individual reward functions, enabling distributed policy updates and joint generation. It integrates LLM-based RL with MARL techniques within a multi-round interactive training framework.
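The core of the group-relative mechanism is to score a group of sampled joint generations with a shared team-level reward and normalize within the group, so no per-agent reward or learned value baseline is needed. A minimal sketch of that normalization step, assuming a NumPy-style implementation (the function name and reward values are illustrative, not from the paper):

```python
import numpy as np

def group_relative_advantages(joint_rewards):
    """Normalize a group of joint-episode rewards into relative advantages.

    Each entry is one team-level reward for a joint generation produced
    by all cooperating agents together.
    """
    r = np.asarray(joint_rewards, dtype=float)
    baseline = r.mean()          # group mean replaces a learned critic
    scale = r.std() + 1e-8       # epsilon guards against zero variance
    return (r - baseline) / scale

# Hypothetical scores for G = 4 joint generations on one prompt:
adv = group_relative_advantages([0.2, 0.8, 0.5, 0.5])
# Each agent then weights its own log-probability gradient by the shared
# advantage of the joint response it took part in.
```

Because the advantage is shared across the team for each joint response, every agent receives the same learning signal for a given collaboration, which is what removes the need for individually engineered rewards.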
Results: Empirical evaluation on collaborative writing and programming tasks demonstrates that MAGRPO significantly improves both the efficiency and output quality of multi-LLM cooperation. It offers a scalable paradigm for multi-agent collaboration that bypasses per-agent reward engineering and supports effective coordination among LLM agents.
📝 Abstract
A large amount of work has been done in Multi-Agent Systems (MAS) for modeling and solving problems with multiple interacting agents. However, most LLMs are pretrained independently and not specifically optimized for coordination. Existing LLM fine-tuning frameworks rely on individual rewards, which require complex reward designs for each agent to encourage collaboration. To address these challenges, we model LLM collaboration as a cooperative Multi-Agent Reinforcement Learning (MARL) problem. We develop a multi-agent, multi-turn algorithm, Multi-Agent Group Relative Policy Optimization (MAGRPO), to solve it, building on current RL approaches for LLMs as well as MARL techniques. Our experiments on LLM writing and coding collaboration demonstrate that fine-tuning MAS with MAGRPO enables agents to generate high-quality responses efficiently through effective cooperation. Our approach opens the door to using other MARL methods for LLMs and highlights the associated challenges.