Maximum Entropy Heterogeneous-Agent Reinforcement Learning

📅 2023-06-19
🏛️ International Conference on Learning Representations
📈 Citations: 1
Influential: 1
🤖 AI Summary
To address low sample efficiency, training instability, and convergence to suboptimal Nash equilibria in multi-agent reinforcement learning (MARL), this paper proposes a probabilistic graphical model framework grounded in the maximum entropy principle, enabling stochastic cooperative policy learning among heterogeneous agents. The method integrates maximum entropy RL, heterogeneous policy parameterization, and mirror descent optimization within a scalable distributed training architecture. Key contributions include: (1) the Heterogeneous-Agent Soft Actor-Critic (HASAC) algorithm; and (2) a general Maximum Entropy Heterogeneous-Agent Mirror Learning (MEHAML) template, with theoretical guarantees of monotonic policy improvement and convergence to quantal response equilibria (QRE). Evaluated on six benchmarks, including Bi-DexHands, the approach consistently outperforms state-of-the-art methods, achieving significant gains in sample efficiency, robustness, and exploration.
📝 Abstract
Multi-agent reinforcement learning (MARL) has been shown effective for cooperative games in recent years. However, existing state-of-the-art methods face challenges related to sample complexity, training instability, and the risk of converging to a suboptimal Nash Equilibrium. In this paper, we propose a unified framework for learning stochastic policies to resolve these issues. We embed cooperative MARL problems into probabilistic graphical models, from which we derive the maximum entropy (MaxEnt) objective for MARL. Based on the MaxEnt framework, we propose Heterogeneous-Agent Soft Actor-Critic (HASAC) algorithm. Theoretically, we prove the monotonic improvement and convergence to quantal response equilibrium (QRE) properties of HASAC. Furthermore, we generalize a unified template for MaxEnt algorithmic design named Maximum Entropy Heterogeneous-Agent Mirror Learning (MEHAML), which provides any induced method with the same guarantees as HASAC. We evaluate HASAC on six benchmarks: Bi-DexHands, Multi-Agent MuJoCo, StarCraft Multi-Agent Challenge, Google Research Football, Multi-Agent Particle Environment, and Light Aircraft Game. Results show that HASAC consistently outperforms strong baselines, exhibiting better sample efficiency, robustness, and sufficient exploration. See our page at https://sites.google.com/view/meharl.
Problem

Research questions and friction points this paper is trying to address.

Addresses sample complexity in multi-agent reinforcement learning.
Resolves training instability and suboptimal Nash Equilibrium convergence.
Proposes a unified framework for stochastic policy learning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Maximum Entropy framework for MARL
Heterogeneous-Agent Soft Actor-Critic algorithm
Unified MaxEnt algorithmic design template
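
The innovations above rest on augmenting the cooperative return with policy entropy. A minimal sketch of such a MaxEnt objective for n agents is shown below; the temperature α, entropy term H, and the per-agent factorization are illustrative notation, not necessarily the paper's exact formulation:

```latex
% Sketch of a maximum-entropy objective for cooperative MARL (illustrative):
% the shared discounted return is augmented with the entropy of each agent's
% policy, weighted by a temperature \alpha trading off reward and exploration.
J(\boldsymbol{\pi}) = \mathbb{E}_{\boldsymbol{\pi}}\!\left[
  \sum_{t=0}^{\infty} \gamma^{t} \Big( r(s_t, \mathbf{a}_t)
  + \alpha \sum_{i=1}^{n} \mathcal{H}\big(\pi^{i}(\cdot \mid s_t)\big) \Big)
\right]
```

Maximizing entropy alongside reward yields stochastic rather than deterministic policies, which is what connects this objective to convergence toward quantal response equilibria instead of a possibly suboptimal Nash equilibrium.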
Jiarong Liu
Institute for AI, Peking University
Yifan Zhong
Peking University
VLA Models, Dexterous Manipulation, Reinforcement Learning
Siyi Hu
Adelaide University
Generative AI, Reinforcement Learning, Multi-Agent Systems
Haobo Fu
Tencent AI Lab, University of Birmingham
Reinforcement Learning, Evolutionary Computation
Qiang Fu
Tencent AI Lab
Xiaojun Chang
University of Technology Sydney
Yaodong Yang
Institute for AI, Peking University