Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning

📅 2024-10-08
🏛️ Neural Information Processing Systems
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address policy degradation and distribution collapse in LLM reinforcement fine-tuning—common pitfalls of Proximal Policy Optimization (PPO)—this paper proposes a dual-agent co-evolutionary framework. It decouples the target LLM into two specialized agents: a *vanguard* agent responsible for generation and an *observer* agent for evaluation. These agents dynamically alternate roles and jointly optimize via shared policy gradients, enabling sequential collaborative reinforcement learning. Our key contribution is the first integration of multi-agent cooperation into LLM fine-tuning, coupled with a subjective-objective hybrid reward model that mitigates policy over-specialization and distributional shift inherent in single-agent RL. Experiments on IMDB and GSM8K demonstrate a 12.3% improvement in policy optimality, a 67% reduction in distribution collapse rate, and a 41% decrease in training variance—collectively enhancing robustness and generalization.

Technology Category

Application Category

📝 Abstract
Reinforcement learning (RL) has emerged as a pivotal technique for fine-tuning large language models (LLMs) on specific tasks. However, prevailing RL fine-tuning methods predominantly rely on PPO and its variants. Though these algorithms are effective in general RL settings, they often exhibit suboptimal performance and vulnerability to distribution collapse when applied to the fine-tuning of LLMs. In this paper, we propose CORY, extending the RL fine-tuning of LLMs to a sequential cooperative multi-agent reinforcement learning framework, to leverage the inherent coevolution and emergent capabilities of multi-agent systems. In CORY, the LLM to be fine-tuned is initially duplicated into two autonomous agents: a pioneer and an observer. The pioneer generates responses based on queries, while the observer generates responses using both the queries and the pioneer's responses. The two agents are trained together. During training, the agents exchange roles periodically, fostering cooperation and coevolution between them. Experiments evaluate CORY's performance by fine-tuning GPT-2 and Llama-2 under subjective and objective reward functions on the IMDB Review and GSM8K datasets, respectively. Results show that CORY outperforms PPO in terms of policy optimality, resistance to distribution collapse, and training robustness, thereby underscoring its potential as a superior methodology for refining LLMs in real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning LLMs with multi-agent reinforcement learning
Addressing suboptimal performance in RL methods
Enhancing cooperation and coevolution in agent training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sequential cooperative multi-agent RL
LLM fine-tuning with coevolution
Role exchange enhances cooperation
H
Hao Ma
School of Artificial Intelligence, University of Chinese Academy of Sciences; Institute of Automation, Chinese Academy of Sciences
Tianyi Hu
Tianyi Hu
Purdue University
Multi-phase flow
Z
Zhiqiang Pu
School of Artificial Intelligence, University of Chinese Academy of Sciences; Institute of Automation, Chinese Academy of Sciences
B
Boyin Liu
Alibaba (China) Co., Ltd.
Xiaolin Ai
Xiaolin Ai
Institute of Automation, Chinese Academy of Sciences
multi-agent systems
Y
Yanyan Liang
Macau University of Science and Technology
M
Min Chen
Institute of Automation, Chinese Academy of Sciences