🤖 AI Summary
To address policy degradation and distribution collapse in LLM reinforcement fine-tuning—common pitfalls of Proximal Policy Optimization (PPO)—this paper proposes a dual-agent co-evolutionary framework. It decouples the target LLM into two specialized agents: a *vanguard* agent responsible for generation and an *observer* agent for evaluation. These agents dynamically alternate roles and jointly optimize via shared policy gradients, enabling sequential collaborative reinforcement learning. Our key contribution is the first integration of multi-agent cooperation into LLM fine-tuning, coupled with a subjective-objective hybrid reward model that mitigates policy over-specialization and distributional shift inherent in single-agent RL. Experiments on IMDB and GSM8K demonstrate a 12.3% improvement in policy optimality, a 67% reduction in distribution collapse rate, and a 41% decrease in training variance—collectively enhancing robustness and generalization.
📝 Abstract
Reinforcement learning (RL) has emerged as a pivotal technique for fine-tuning large language models (LLMs) on specific tasks. However, prevailing RL fine-tuning methods predominantly rely on PPO and its variants. Though these algorithms are effective in general RL settings, they often exhibit suboptimal performance and vulnerability to distribution collapse when applied to the fine-tuning of LLMs. In this paper, we propose CORY, extending the RL fine-tuning of LLMs to a sequential cooperative multi-agent reinforcement learning framework, to leverage the inherent coevolution and emergent capabilities of multi-agent systems. In CORY, the LLM to be fine-tuned is initially duplicated into two autonomous agents: a pioneer and an observer. The pioneer generates responses based on queries, while the observer generates responses using both the queries and the pioneer's responses. The two agents are trained together. During training, the agents exchange roles periodically, fostering cooperation and coevolution between them. Experiments evaluate CORY's performance by fine-tuning GPT-2 and Llama-2 under subjective and objective reward functions on the IMDB Review and GSM8K datasets, respectively. Results show that CORY outperforms PPO in terms of policy optimality, resistance to distribution collapse, and training robustness, thereby underscoring its potential as a superior methodology for refining LLMs in real-world applications.