🤖 AI Summary
This work addresses the instability in training multi-agent large language models (LLMs) with reinforcement learning, which arises from the incompatibility of global advantage normalization with heterogeneous reward distributions across agents. The authors propose Dr. MAS, a novel framework that, for the first time, uncovers how global normalization induces gradient norm instability and introduces an agent-level advantage normalization strategy to calibrate gradient scales. Dr. MAS integrates support for heterogeneous agent models, shared LLM inference scheduling, and configurable optimization strategies into an end-to-end scalable architecture. Experiments demonstrate significant performance gains over baselines, with average improvements of 5.6% and 15.2% on mathematical reasoning and multi-turn search tasks, respectively, while effectively suppressing gradient spikes and enabling stable, efficient training.
📝 Abstract
Multi-agent LLM systems enable advanced reasoning and tool use via role specialization, yet reliable reinforcement learning (RL) post-training for such systems remains difficult. In this work, we theoretically pinpoint a key cause of training instability when extending group-based RL to multi-agent LLM systems. We show that under GRPO-style optimization, a global normalization baseline may deviate from diverse agents' reward distributions, which ultimately leads to gradient-norm instability. Based on this finding, we propose Dr. MAS, a simple and stable RL training recipe for multi-agent LLM systems. Dr. MAS applies an agent-wise remedy: normalizing advantages per agent using each agent's own reward statistics, which calibrates gradient scales and dramatically stabilizes training, both theoretically and empirically. Beyond the algorithm, Dr. MAS provides an end-to-end RL training framework for multi-agent LLM systems, supporting scalable orchestration, flexible per-agent LLM serving and optimization configs, and shared resource scheduling of LLM actor backends. We evaluate Dr. MAS on multi-agent math reasoning and multi-turn search benchmarks using Qwen2.5 and Qwen3 series models. Dr. MAS achieves clear gains over vanilla GRPO (e.g., +5.6% avg@16 and +4.6% pass@16 on math, and +15.2% avg@16 and +13.1% pass@16 on search) while largely eliminating gradient spikes. Moreover, it remains highly effective under heterogeneous agent-model assignments while improving efficiency.
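The core idea — replacing GRPO's pooled advantage baseline with per-agent reward statistics — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and the toy two-agent reward data are hypothetical, and it only shows how pooling rewards across agents with different reward scales distorts advantages, while per-agent normalization yields zero-mean, unit-variance advantages for each agent.

```python
import numpy as np

def global_norm_advantages(rewards):
    """GRPO-style global baseline (sketch): one mean/std over all agents' rewards."""
    pooled = np.concatenate(list(rewards.values()))
    mu, sigma = pooled.mean(), pooled.std() + 1e-8
    return {agent: (r - mu) / sigma for agent, r in rewards.items()}

def agent_wise_advantages(rewards):
    """Dr. MAS-style remedy (sketch): normalize each agent with its own statistics."""
    out = {}
    for agent, r in rewards.items():
        mu, sigma = r.mean(), r.std() + 1e-8
        out[agent] = (r - mu) / sigma
    return out

# Hypothetical rollout rewards for two agents with very different reward scales.
rewards = {
    "planner": np.array([0.0, 1.0, 0.0, 1.0]),
    "solver":  np.array([10.0, 30.0, 20.0, 40.0]),
}

g = global_norm_advantages(rewards)
a = agent_wise_advantages(rewards)
# Under the pooled baseline, every planner advantage is negative (all its
# rewards lie below the pooled mean of 12.75), so its gradient signal is
# systematically skewed. Agent-wise normalization gives each agent advantages
# with mean 0 and std 1, keeping gradient scales comparable across agents.
```

The mismatch in scale is exactly the heterogeneity the abstract attributes gradient-norm instability to; normalizing per agent removes the cross-agent coupling through the shared baseline.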