🤖 AI Summary
Conventional multi-agent reinforcement learning (MARL) methods transfer poorly to large language model (LLM)-driven multi-agent systems (LaMAS): the two differ fundamentally in architecture and in how their policies are formulated and optimized.
Method: This paper proposes Multi-Agent Reinforcement Fine-Tuning (MARFT), a paradigm tailored to LaMAS. It formally defines the first systematic reinforcement fine-tuning framework specific to LaMAS, elucidating its essential distinctions from classical MARL in objective formulation, gradient propagation, and collaborative modeling. The resulting general, scalable MARFT algorithm integrates instruction tuning, multi-agent collaborative modeling, and LLM-specific inference mechanisms into an end-to-end differentiable training pipeline.
Contribution/Results: The authors open-source a complete implementation and empirically demonstrate improved robustness and adaptability on complex agent-centric tasks, such as scientific collaboration and automated content generation, thereby bridging the gap between LLM-based agents and principled reinforcement learning.
📝 Abstract
LLM-based multi-agent systems (LaMAS) have demonstrated remarkable capabilities on complex, agentic tasks requiring multifaceted reasoning and collaboration, from generating high-quality presentation slides to conducting sophisticated scientific research. Meanwhile, reinforcement learning (RL) is widely recognized for its effectiveness in enhancing agent intelligence, yet little research has investigated fine-tuning LaMAS with foundational RL techniques. Moreover, directly applying MARL methodologies to LaMAS introduces significant challenges, stemming from the unique characteristics and mechanisms inherent to LaMAS. To address these challenges, this article presents a comprehensive study of LLM-based MARL and proposes a novel paradigm termed Multi-Agent Reinforcement Fine-Tuning (MARFT). We introduce a universal algorithmic framework tailored for LaMAS, outlining its conceptual foundations, key distinctions from prior work, and practical implementation strategies. We begin by reviewing the evolution from RL to Reinforcement Fine-Tuning (RFT), setting the stage for a parallel analysis in the multi-agent domain. In the context of LaMAS, we elucidate the critical differences between MARL and MARFT; these differences motivate a transition toward a novel, LaMAS-oriented formulation of RFT. Central to this work is a robust and scalable MARFT framework: we detail the core algorithm and provide a complete, open-source implementation to facilitate adoption and further research. The latter sections of the paper explore real-world application perspectives and open challenges in MARFT. By bridging theoretical underpinnings with practical methodologies, this work aims to serve as a roadmap for researchers seeking to advance MARFT toward resilient and adaptive solutions in agentic systems. Our implementation of the proposed framework is publicly available at: https://github.com/jwliao-ai/MARFT.
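The paper's actual framework lives in the linked repository and is not reproduced here, but the core idea it builds on, several agents fine-tuned with a policy-gradient update against a shared task reward, can be illustrated with a deliberately tiny REINFORCE sketch. Everything below is an illustrative toy (the discrete action space, the sum-to-target task, the learning rate), not MARFT itself: the logit vectors merely stand in for the LLM policies that MARFT would fine-tune.

```python
import math
import random

random.seed(0)

# Toy stand-ins: each "agent" is a logit vector over a tiny discrete action
# space, playing the role an LLM policy would play in a real system.
ACTIONS = [0, 1, 2]
TARGET_SUM = 3        # joint task: the two agents' actions should sum to 3
LR = 0.5

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

agents = [[0.0] * len(ACTIONS) for _ in range(2)]

def joint_success_rate(n=2000):
    hits = 0
    for _ in range(n):
        acts = [ACTIONS[sample(softmax(a))] for a in agents]
        hits += acts[0] + acts[1] == TARGET_SUM
    return hits / n

before = joint_success_rate()

for _ in range(500):
    idxs = [sample(softmax(a)) for a in agents]
    reward = 1.0 if sum(ACTIONS[i] for i in idxs) == TARGET_SUM else 0.0
    # REINFORCE: each agent ascends the gradient of log pi(a_i), scaled by
    # the shared joint reward (credit assignment is deliberately naive here).
    for logits, chosen in zip(agents, idxs):
        probs = softmax(logits)
        for j in range(len(logits)):
            grad = (1.0 if j == chosen else 0.0) - probs[j]
            logits[j] += LR * reward * grad

after = joint_success_rate()
print(f"joint success rate: {before:.2f} -> {after:.2f}")
```

The sketch shows the mechanic, not the hard part: with independent per-agent updates on one shared scalar reward, credit assignment and coordination degrade as agents and horizons grow, which is precisely the gap a LaMAS-specific formulation like MARFT targets.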