🤖 AI Summary
To address low sample efficiency and training instability in multi-agent reinforcement learning (MARL), this paper proposes MARPO, a reflective policy optimization framework. Methodologically, MARPO introduces two key innovations: (1) a trajectory-backtracking reflection mechanism that leverages high-return future trajectories to strengthen current policy updates, significantly improving sample reuse; and (2) a KL-divergence-guided dynamic asymmetric clipping strategy that adaptively modulates policy update steps, enhancing robustness while maintaining convergence. Empirically, MARPO achieves superior final performance with fewer environment interactions on standard benchmarks—including StarCraft II and the Multi-Agent Particle Environment (MPE)—and consistently outperforms state-of-the-art baselines such as MAPPO and QMIX. The framework offers an efficient and stable approach to MARL training.
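The summary's first mechanism, trajectory-backtracking reflection, can be illustrated with a minimal sketch. Note this is an assumed interpretation, not the paper's actual algorithm: transitions whose realized return-to-go (the discounted future return from that step) is high are upweighted, so high-return trajectory suffixes exert more influence on the policy update. The function name, the 2x/1x weighting, and the `top_frac` threshold are all hypothetical choices for illustration.

```python
import numpy as np

def reflective_weights(rewards, gamma=0.99, top_frac=0.5):
    """Hypothetical sketch of trajectory-backtracking reflection.

    Assumption (not from the paper): upweight transitions whose
    return-to-go is in the top `top_frac` fraction, so high-return
    suffixes contribute more to the policy-gradient loss.
    """
    rewards = np.asarray(rewards, dtype=float)
    # Discounted return-to-go: G_t = r_t + gamma * G_{t+1}
    rtg = np.zeros_like(rewards)
    running = 0.0
    for t in range(len(rewards) - 1, -1, -1):
        running = rewards[t] + gamma * running
        rtg[t] = running
    # Upweight the top fraction of transitions by return-to-go
    threshold = np.quantile(rtg, 1.0 - top_frac)
    weights = np.where(rtg >= threshold, 2.0, 1.0)
    return weights, rtg
```

In a full training loop, `weights` would multiply each transition's surrogate loss term, biasing updates toward samples backed by high-return futures.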
📝 Abstract
We propose Multi-Agent Reflective Policy Optimization (MARPO) to alleviate the issue of sample inefficiency in multi-agent reinforcement learning. MARPO consists of two key components: a reflection mechanism that leverages subsequent trajectories to enhance sample efficiency, and an asymmetric clipping mechanism that is derived from the KL divergence and dynamically adjusts the clipping range to improve training stability. We evaluate MARPO in classic multi-agent environments, where it consistently outperforms other methods.
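The second component, KL-guided asymmetric clipping, can be sketched as a variation on the PPO surrogate. The exact rule is not specified here, so the schedule below is an assumption for illustration: the clipping range shrinks as the per-sample KL divergence between the new and old policies grows, and the lower and upper bounds are allowed to differ. The names `eps_base`, `kappa`, and the specific bound shapes are hypothetical.

```python
import numpy as np

def asymmetric_clip_objective(ratio, advantage, kl, eps_base=0.2, kappa=1.0):
    """Hypothetical sketch of a KL-guided asymmetric clipping surrogate.

    Assumption (not from the paper): the clip half-width tightens as
    the per-sample KL grows, and the upper bound is wider than the
    lower bound, permitting larger upward policy-ratio moves.
    """
    # Tighten the clip range when the policies have diverged (assumed schedule)
    eps = eps_base / (1.0 + kappa * kl)
    low, high = 1.0 - eps, 1.0 + 2.0 * eps  # asymmetric bounds (illustrative)
    clipped = np.clip(ratio, low, high)
    # PPO-style pessimistic surrogate: take the smaller of the two terms
    return np.minimum(ratio * advantage, clipped * advantage)
```

For example, with zero KL the bounds stay at their widest ([0.8, 1.4] under these defaults), while a larger KL halves the range, so a policy that has already drifted takes smaller update steps.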