🤖 AI Summary
This work addresses the limitations of existing large language model (LLM) collaboration approaches, which often rely on centralized protocols that lack deployment flexibility and on Monte Carlo–based fine-tuning that suffers from high variance and low sample efficiency. To overcome these challenges, the study introduces a multi-agent actor-critic framework for decentralized LLM collaboration, proposing two architectures: CoLLM-CC (Centralized Critic) and CoLLM-DC (Decentralized Critic). These designs improve training stability and sample efficiency, and the analysis shows which critic structure suits which reward setting: long- versus short-horizon tasks and sparse versus dense rewards. Experimental results demonstrate that CoLLM-CC significantly outperforms both Monte Carlo baselines and CoLLM-DC on long-horizon or sparse-reward tasks, whereas CoLLM-DC achieves comparable performance in short-horizon, dense-reward scenarios.
📝 Abstract
Recent work has explored optimizing LLM collaboration through Multi-Agent Reinforcement Learning (MARL). However, most MARL fine-tuning approaches rely on predefined execution protocols, which often require centralized execution. Decentralized LLM collaboration is more appealing in practice, as agents can run inference in parallel with flexible deployments. Moreover, current approaches use Monte Carlo methods for fine-tuning, which suffer from high variance and thus require more samples to train effectively. Actor-critic methods are prevalent in MARL for addressing these issues, so we developed Multi-Agent Actor-Critic (MAAC) methods to optimize decentralized LLM collaboration. In this paper, we analyze when and why these MAAC methods are beneficial. We propose two MAAC approaches: **CoLLM-CC** with a **C**entralized **C**ritic and **CoLLM-DC** with **D**ecentralized **C**ritics. Our experiments across writing, coding, and game-playing domains show that Monte Carlo methods and CoLLM-DC can achieve performance comparable to CoLLM-CC in short-horizon, dense-reward settings. However, both underperform CoLLM-CC on long-horizon or sparse-reward tasks, where Monte Carlo methods require substantially more samples and CoLLM-DC struggles to converge. Our code is available at https://github.com/OpenMLRL/CoMLRL/releases/tag/v1.3.2.
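To make the centralized-versus-decentralized distinction concrete, here is a minimal sketch of the two critic structures. This is an illustrative toy, not the paper's implementation: the class names, the linear value functions, and the observation dimensions are all assumptions. The point it shows is purely structural: a centralized critic (CoLLM-CC style) conditions on the concatenated joint observation of all agents, while decentralized critics (CoLLM-DC style) give each agent its own value function over only its local observation.

```python
# Toy sketch (hypothetical, not from the paper) contrasting the two critic designs.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM = 3, 4  # illustrative sizes


class LinearCritic:
    """Toy linear value function V(x) = w . x (stand-in for a learned critic)."""

    def __init__(self, in_dim):
        self.w = rng.normal(size=in_dim)

    def value(self, x):
        return float(self.w @ x)


# CoLLM-CC style: one centralized critic scores the *joint* observation,
# so its input is the concatenation of every agent's observation.
central_critic = LinearCritic(N_AGENTS * OBS_DIM)

# CoLLM-DC style: each agent keeps its own critic over its local observation only.
local_critics = [LinearCritic(OBS_DIM) for _ in range(N_AGENTS)]

obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]

joint_value = central_critic.value(np.concatenate(obs))          # one scalar for the team
local_values = [c.value(o) for c, o in zip(local_critics, obs)]  # one scalar per agent
```

Under this framing, the centralized critic sees global state and can assign credit across agents (helpful with sparse or long-horizon rewards), at the cost of requiring joint information during training; decentralized critics keep training fully local but must infer credit from local signals alone.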