Zero-Shot Coordination in Ad Hoc Teams with Generalized Policy Improvement and Difference Rewards

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the ad hoc teaming problem in multi-agent systems, where agents must collaborate zero-shot with previously unseen teammates. We propose GPAT (Generalized Policy improvement for Ad hoc Teaming), a novel algorithm grounded in the ad hoc multi-agent Markov decision process framework that enables cross-team knowledge transfer via generalized policy improvement and difference rewards. GPAT requires no prior information about teammates and no joint training; it achieves immediate coordination using only a pretrained, heterogeneous policy library. Extensive experiments across cooperative foraging, predator-prey, Overcooked, and a real-world multi-robot platform demonstrate that GPAT consistently outperforms existing zero-shot collaboration methods, achieving significant improvements in task success rate, team adaptability, and generalization. The approach provides a scalable coordination paradigm for open, dynamic multi-agent environments.
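To make the core transfer idea concrete, here is a minimal sketch of generalized policy improvement (GPI) over a pretrained policy library: act greedily with respect to the pointwise maximum of the library's Q-functions. This is an illustrative toy (the names `q_library` and `select_action`, and the tabular setting, are assumptions), not the paper's implementation.

```python
import numpy as np

def select_action(state: int, q_library: list[np.ndarray]) -> int:
    """GPI action selection: argmax_a max_i Q_i(state, a) over the library."""
    # Stack per-policy action values for this state: shape (n_policies, n_actions)
    q_values = np.stack([q[state] for q in q_library])
    # Take the best action under the most optimistic library member
    return int(q_values.max(axis=0).argmax())

# Toy library: two tabular Q-functions over 3 states x 2 actions,
# e.g. learned with two different pretraining teams
q1 = np.array([[1.0, 0.5], [0.2, 0.9], [0.0, 0.1]])
q2 = np.array([[0.3, 2.0], [1.5, 0.1], [0.4, 0.2]])
print(select_action(0, [q1, q2]))  # 1: q2 promises 2.0 for action 1
```

The resulting policy is guaranteed (in the standard GPI setting) to perform at least as well as every individual policy in the library, which is what makes it attractive for transfer to a new team.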

📝 Abstract
Real-world multi-agent systems may require ad hoc teaming, where an agent must coordinate with other previously unseen teammates to solve a task in a zero-shot manner. Prior work often either selects a pretrained policy based on an inferred model of the new teammates or pretrains a single policy that is robust to potential teammates. Instead, we propose to leverage all pretrained policies in a zero-shot transfer setting. We formalize this problem as an ad hoc multi-agent Markov decision process and present a solution that uses two key ideas, generalized policy improvement and difference rewards, for efficient and effective knowledge transfer between different teams. We empirically demonstrate that our algorithm, Generalized Policy improvement for Ad hoc Teaming (GPAT), successfully enables zero-shot transfer to new teams in three simulated environments: cooperative foraging, predator-prey, and Overcooked. We also demonstrate our algorithm in a real-world multi-robot setting.
Problem

Research questions and friction points this paper is trying to address.

Enabling zero-shot coordination with unseen teammates
Transferring knowledge across teams using policy improvement
Solving ad hoc multi-agent tasks without prior coordination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages all pretrained policies for zero-shot transfer
Uses generalized policy improvement for knowledge transfer
Applies difference rewards to enhance coordination efficiency
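The difference-reward idea behind the third bullet can be sketched as follows: an agent's contribution is credited as the global reward minus a counterfactual global reward with that agent's action replaced by a default action. The toy objective, the no-op convention, and all names here are illustrative assumptions, not the paper's definitions.

```python
NOOP = 0  # assumed default (counterfactual) action

def global_reward(joint_action: list[int]) -> float:
    """Toy team objective: number of distinct non-noop actions covered."""
    return float(len({a for a in joint_action if a != NOOP}))

def difference_reward(joint_action: list[int], agent: int) -> float:
    """D_i = G(z) - G(z with agent i's action replaced by the no-op)."""
    counterfactual = list(joint_action)
    counterfactual[agent] = NOOP
    return global_reward(joint_action) - global_reward(counterfactual)

actions = [1, 2, 2]
print(difference_reward(actions, 0))  # 1.0: agent 0 covers a unique action
print(difference_reward(actions, 1))  # 0.0: agent 2 duplicates agent 1's action
```

Because the counterfactual subtracts out what the rest of the team achieves regardless, the signal isolates each agent's marginal contribution, which is what makes difference rewards useful for credit assignment in cooperative settings.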
Rupal Nigam
The Grainger College of Engineering, University of Illinois Urbana-Champaign, Champaign, IL 61820
Niket Parikh
The Grainger College of Engineering, University of Illinois Urbana-Champaign, Champaign, IL 61820
Hamid Osooli
University of Illinois at Urbana-Champaign
Robotics · Machine Learning · Reinforcement Learning · Game Theory
Mikihisa Yuasa
The Grainger College of Engineering, University of Illinois Urbana-Champaign, Champaign, IL 61820
Jacob Heglund
The Grainger College of Engineering, University of Illinois Urbana-Champaign, Champaign, IL 61820
Huy T. Tran
The Grainger College of Engineering, University of Illinois Urbana-Champaign, Champaign, IL 61820