MARFT: Multi-Agent Reinforcement Fine-Tuning

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language model (LLM)-driven multi-agent systems (LaMAS) are inherently incompatible with conventional multi-agent reinforcement learning (MARL) methods due to fundamental architectural and optimization mismatches. Method: This paper proposes Multi-Agent Reinforcement Fine-Tuning (MARFT), a novel paradigm tailored for LaMAS. We formally define the first systematic reinforcement fine-tuning framework specific to LaMAS, elucidating its essential distinctions from classical MARL in objective formulation, gradient propagation, and collaborative modeling. Our general, scalable MARFT algorithm integrates instruction tuning, multi-agent collaborative modeling, and LLM-specific inference mechanisms into an end-to-end differentiable training pipeline. Contribution/Results: We open-source a complete implementation and empirically demonstrate substantial improvements in robustness and adaptability across complex agent-centric tasks—including scientific collaboration and automated content generation—thereby bridging the gap between LLM-based agents and principled reinforcement learning.

📝 Abstract
LLM-based Multi-Agent Systems (LaMAS) have demonstrated remarkable capabilities in addressing complex, agentic tasks requiring multifaceted reasoning and collaboration, from generating high-quality presentation slides to conducting sophisticated scientific research. Meanwhile, reinforcement learning (RL) has been widely recognized for its effectiveness in enhancing agent intelligence, but limited research has investigated the fine-tuning of LaMAS using foundational RL techniques. Moreover, the direct application of multi-agent reinforcement learning (MARL) methodologies to LaMAS introduces significant challenges, stemming from the unique characteristics and mechanisms inherent to LaMAS. To address these challenges, this article presents a comprehensive study of LLM-based MARL and proposes a novel paradigm termed Multi-Agent Reinforcement Fine-Tuning (MARFT). We introduce a universal algorithmic framework tailored for LaMAS, outlining the conceptual foundations, key distinctions, and practical implementation strategies. We begin by reviewing the evolution from RL to Reinforcement Fine-Tuning (RFT), setting the stage for a parallel analysis in the multi-agent domain. In the context of LaMAS, we elucidate critical differences between MARL and MARFT. These differences motivate a transition toward a novel, LaMAS-oriented formulation of RFT. Central to this work is the presentation of a robust and scalable MARFT framework. We detail the core algorithm and provide a complete, open-source implementation to facilitate adoption and further research. The latter sections of the paper explore real-world application perspectives and open challenges in MARFT. By bridging theoretical underpinnings with practical methodologies, this work aims to serve as a roadmap for researchers seeking to advance MARFT toward resilient and adaptive solutions in agentic systems. Our implementation of the proposed framework is publicly available at: https://github.com/jwliao-ai/MARFT.
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning LLM-based multi-agent systems using RL techniques
Addressing challenges in applying MARL to LaMAS effectively
Developing scalable MARFT framework for adaptive agentic systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Agent Reinforcement Fine-Tuning (MARFT) paradigm
Universal algorithmic framework for LaMAS
Open-source implementation for MARFT
Junwei Liao
Shanghai Jiao Tong University, Shanghai Innovation Institute, Xi’an Jiaotong University
Muning Wen
Research Assistant Professor, Shanghai Jiao Tong University
(multi-agent) reinforcement learning; language agent / LLM-based agent
Jun Wang
OPPO Research Institute
Weinan Zhang
Shanghai Jiao Tong University