🤖 AI Summary
Existing multi-agent frameworks rely on implicit collaboration capabilities acquired during large language model (LLM) pretraining, so collaborative behavior generalizes poorly and cannot be directly optimized. This paper proposes the first end-to-end trainable two-agent collaborative framework, explicitly modeling collaboration as a learnable process: an Actor-Agent executes tasks while a Critic-Agent specializes in evaluating collaboration quality and providing feedback, marking the first application of the Actor-Critic paradigm to multi-LLM collaboration. The method integrates supervised fine-tuning of LLMs, dialogue policy-gradient optimization, and collaboration-aware reward modeling, enabling joint optimization of collaborative policies via multi-turn trajectory sampling. Evaluated on multiple benchmarks, the approach significantly outperforms state-of-the-art methods, achieving consistent improvements in task completion rate, response quality, and collaboration stability.
📝 Abstract
Large language models (LLMs) have demonstrated a remarkable ability to serve as general-purpose tools for various language-based tasks. Recent works have shown that the efficacy of such models can be improved through iterative dialog between multiple models. While these paradigms show promise, most work in this area treats collaboration as an emergent behavior rather than a learned one. In doing so, current multi-agent frameworks rely on collaborative behaviors having been sufficiently trained into off-the-shelf models. To address this limitation, we propose ACC-Collab, an Actor-Critic based learning framework that produces a two-agent team (an actor-agent and a critic-agent) specialized in collaboration. We demonstrate that ACC-Collab outperforms SotA multi-agent techniques on a wide array of benchmarks.
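The summary describes the training loop only at a high level. As a purely illustrative sketch (not the paper's actual algorithm), the actor-critic idea can be reduced to a toy: the actor is a softmax policy over a few dialogue moves, the critic's collaboration scores are fixed numbers standing in for a learned reward model, and, for determinism, we apply the expected (closed-form) policy gradient rather than sampling multi-turn trajectories. Every name and value below is an assumption made for illustration.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

# Toy dialogue moves for the "actor" (illustrative, not from the paper).
ACTIONS = ["revise", "defend", "ask_clarifying"]
# Stand-in for the Critic-Agent: fixed collaboration-quality scores per move.
REWARDS = [1.0, 0.2, 0.5]
# Actor policy parameters (one logit per move).
logits = [0.0, 0.0, 0.0]

def expected_policy_gradient_step(logits, rewards, lr=0.5):
    # Closed-form expectation of the REINFORCE update for a softmax policy:
    # d/d logit_i E[R] = p_i * (R_i - E[R]).
    probs = softmax(logits)
    baseline = sum(p * r for p, r in zip(probs, rewards))  # E[R]
    for i in range(len(logits)):
        logits[i] += lr * probs[i] * (rewards[i] - baseline)

for _ in range(200):
    expected_policy_gradient_step(logits, REWARDS)

final = softmax(logits)
# The policy concentrates on the move the critic scores highest ("revise").
```

The real system optimizes LLM dialogue policies from sampled multi-turn trajectories; this toy only shows why critic feedback shifts the actor's policy toward higher-scored collaborative moves.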