MARPO: A Reflective Policy Optimization for Multi-Agent Reinforcement Learning

📅 2025-12-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address low sample efficiency and training instability in multi-agent reinforcement learning (MARL), this paper proposes MARPO, a reflective policy optimization framework. Methodologically, MARPO introduces two key innovations: (1) a trajectory-backtracking reflection mechanism that leverages high-return subsequent trajectories to retroactively strengthen current policy updates, significantly improving sample reuse; and (2) a KL-divergence-guided dynamic asymmetric clipping strategy that adaptively modulates policy update steps, enhancing robustness while ensuring convergence. Empirically, MARPO achieves superior final performance with fewer environment interactions on standard benchmarks, including StarCraft II and the Multi-Agent Particle Environment (MPE), and consistently outperforms state-of-the-art baselines such as MAPPO and QMIX. The framework establishes a new paradigm for efficient and stable MARL training.
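The paper itself does not publish its update rule here, but the KL-guided dynamic asymmetric clipping idea can be illustrated with a PPO-style surrogate whose lower and upper clip bounds differ and tighten as the measured KL divergence grows. The specific adaptation rule (scaling the base epsilons by `1 / (1 + kl / kl_target)`) and all parameter names below are assumptions, not MARPO's actual formulation:

```python
import numpy as np

def asymmetric_clip_loss(ratio, advantage, kl,
                         eps_low=0.1, eps_high=0.3, kl_target=0.01):
    """PPO-like clipped surrogate with KL-modulated asymmetric bounds.

    ratio:     pi_new(a|s) / pi_old(a|s), per sample
    advantage: estimated advantages, per sample
    kl:        scalar KL divergence between old and new policies
    """
    # Tighten both clip bounds when the policies drift apart (illustrative rule).
    scale = 1.0 / (1.0 + kl / kl_target)
    lo = 1.0 - eps_low * scale
    hi = 1.0 + eps_high * scale
    clipped = np.clip(ratio, lo, hi)
    # Standard pessimistic (min) surrogate, averaged over samples; negated for minimization.
    return -np.minimum(ratio * advantage, clipped * advantage).mean()
```

With `kl = 0` the bounds are the full `[0.9, 1.3]`; as `kl` approaches `kl_target` they contract toward 1, shrinking the permissible update step.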

📝 Abstract
We propose Multi-Agent Reflective Policy Optimization (MARPO) to alleviate the issue of sample inefficiency in multi-agent reinforcement learning. MARPO consists of two key components: a reflection mechanism that leverages subsequent trajectories to enhance sample efficiency, and an asymmetric clipping mechanism that is derived from the KL divergence and dynamically adjusts the clipping range to improve training stability. We evaluate MARPO in classic multi-agent environments, where it consistently outperforms other methods.
Problem

Research questions and friction points this paper is trying to address.

Alleviates sample inefficiency in multi-agent reinforcement learning
Enhances sample efficiency using a reflection mechanism
Improves training stability with an asymmetric clipping mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reflection mechanism using subsequent trajectories
Asymmetric clipping from KL divergence
Dynamic adjustment of clipping range
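The reflection mechanism is described only at a high level, but one way to picture "leveraging subsequent trajectories" is to up-weight transitions whose discounted return-to-go later proved high, so they are reused more heavily in subsequent updates. Everything below (the top-fraction threshold, the 2x boost factor, the function name) is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def reflection_weights(rewards, gamma=0.99, top_frac=0.25):
    """Per-step sample weights that boost transitions whose discounted
    return-to-go falls in the top fraction of the trajectory."""
    g = 0.0
    returns = np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):   # backward pass: return-to-go
        g = rewards[t] + gamma * g
        returns[t] = g
    threshold = np.quantile(returns, 1.0 - top_frac)
    weights = np.where(returns >= threshold, 2.0, 1.0)  # boost factor is an assumption
    return weights / weights.mean()           # normalize so the mean weight is 1
```

In a training loop these weights would multiply the per-sample policy loss, biasing updates toward experience that hindsight showed to be valuable.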
Cuiling Wu
School of Computer Science and Technology, Beijing Institute of Technology
Yaozhong Gan
Nanjing University of Aeronautics and Astronautics, China
Junliang Xing
QiYuan Lab
Ying Fu
School of Computer Science and Technology, Beijing Institute of Technology