🤖 AI Summary
In multi-agent reinforcement learning (MARL), the exponential growth of the joint state-action space impedes efficient exploration, while high-quality collaborative expert demonstrations are often impractical to obtain. Method: This paper introduces personalized expert demonstrations, wherein each agent (or each agent type in a heterogeneous team) receives only single-agent demonstrations focused on its individual objective, eliminating reliance on joint cooperative demonstrations. The proposed algorithm, personalized expert-guided MARL (PegMARL), employs a dual-discriminator architecture: a behavior-alignment discriminator that rewards policy fidelity to the individual demonstrations, and a goal-oriented discriminator that regulates those incentives according to whether the resulting behaviors lead to the desired outcome. The framework supports both discrete and continuous action spaces. Contribution/Results: PegMARL outperforms state-of-the-art MARL methods on coordination benchmarks, remains robust to suboptimal personalized demonstrations, and can also leverage joint demonstrations in StarCraft scenarios, converging effectively even when the demonstrations come from non-co-trained policies.
📝 Abstract
Multi-Agent Reinforcement Learning (MARL) algorithms face the challenge of efficient exploration due to the exponential growth of the joint state-action space. While demonstration-guided learning has proven beneficial in single-agent settings, its direct application to MARL is hindered by the practical difficulty of obtaining joint expert demonstrations. In this work, we introduce the novel concept of personalized expert demonstrations, tailored to each individual agent or, more broadly, to each type of agent within a heterogeneous team. These demonstrations pertain solely to single-agent behaviors — how each agent can achieve its personal goal — and contain no cooperative elements; naively imitating them therefore fails to produce cooperation, owing to potential conflicts among agents. To address this, we propose personalized expert-guided MARL (PegMARL), an approach that selectively uses personalized expert demonstrations as guidance while allowing agents to learn to cooperate. The algorithm employs two discriminators: the first provides incentives based on how well an individual agent's behavior aligns with its demonstrations, and the second regulates those incentives based on whether the behaviors lead to the desired outcome. We evaluate PegMARL with personalized demonstrations in both discrete and continuous environments. The experimental results show that PegMARL outperforms state-of-the-art MARL algorithms on coordinated tasks and achieves strong performance even when provided with suboptimal personalized demonstrations. We also showcase PegMARL's ability to leverage joint demonstrations in the StarCraft scenario and to converge effectively even with demonstrations from non-co-trained policies.
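The two-discriminator idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual loss: it assumes GAIL-style log-probability imitation bonuses, stands in tiny logistic models for the discriminator networks, and invents the names `d_personal`, `d_goal`, `shaped_reward`, and the mixing weight `alpha` for exposition. The behavior-alignment discriminator scores state-action pairs against an agent's personalized demonstrations, while the goal-oriented discriminator, applied here to state transitions, gates that bonus so imitation is only encouraged when it moves the agent toward the task goal.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LogisticDiscriminator:
    """Tiny logistic model D(x) in (0, 1) — a stand-in for a neural discriminator."""
    def __init__(self, dim):
        self.w = rng.normal(scale=0.1, size=dim)
        self.b = 0.0

    def __call__(self, x):
        return sigmoid(x @ self.w + self.b)

# Hypothetical per-agent dimensions for illustration.
obs_dim, act_dim = 4, 2
d_personal = LogisticDiscriminator(obs_dim + act_dim)  # behavior alignment: D(s, a)
d_goal = LogisticDiscriminator(obs_dim + obs_dim)      # outcome check: D(s, s')

def shaped_reward(env_reward, obs, act, next_obs, alpha=0.5):
    """Combine both discriminator signals with the environment reward.

    The personal discriminator yields a GAIL-style imitation bonus;
    the goal-oriented discriminator gates that bonus, scaling it down
    for transitions it judges unhelpful toward the desired outcome.
    """
    imitation_bonus = np.log(d_personal(np.concatenate([obs, act])) + 1e-8)
    gate = d_goal(np.concatenate([obs, next_obs]))  # value in (0, 1)
    return env_reward + alpha * gate * imitation_bonus

# Example usage on a random transition.
obs = rng.normal(size=obs_dim)
act = rng.normal(size=act_dim)
next_obs = rng.normal(size=obs_dim)
r = shaped_reward(1.0, obs, act, next_obs)
```

In training, both discriminators would be updated adversarially against the agents' policies (demonstration vs. policy samples for `d_personal`; goal-reaching vs. other transitions for `d_goal`), while each agent's policy maximizes the shaped reward.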