🤖 AI Summary
This work addresses zero-shot cooperative generalization for AI agents in human–agent collaboration, specifically when an agent encounters novel human partners and unseen levels. To overcome the limitations of the conventional same-level training-and-evaluation paradigm, the authors introduce the Overcooked Generalisation Challenge (OGC), the first benchmark explicitly designed for cross-partner and cross-level cooperative generalization. OGC is built on the Overcooked-AI environment and the GPU-accelerated minimax suite, and is the first cooperative multi-agent environment tailored for dual curriculum design (DCD) methods, enabling automated curriculum generation and efficient zero-shot collaboration evaluation. Empirical results reveal substantial performance bottlenecks in existing DCD algorithms and mainstream neural architectures on this generalization task. The contribution is twofold: (1) a standardized, reproducible evaluation platform for cooperative generalization, and (2) a foundational step toward generalization-centric research in human–agent collaboration.
📝 Abstract
We introduce the Overcooked Generalisation Challenge (OGC) - the first benchmark to study agents' zero-shot cooperation abilities when faced with novel partners and levels in the Overcooked-AI environment. This perspective starkly contrasts with a large body of previous work that has trained and evaluated cooperating agents only on the same level, failing to capture the generalisation abilities required for real-world human-AI cooperation. Our challenge interfaces with state-of-the-art dual curriculum design (DCD) methods to generate auto-curricula for training general agents in Overcooked. It is the first cooperative multi-agent environment specially designed for DCD methods and, consequently, the first benchmarked with state-of-the-art methods. It is fully GPU-accelerated, built on the DCD benchmark suite minimax, and freely available under an open-source license: https://git.hcics.simtech.uni-stuttgart.de/public-projects/OGC. We show that current DCD algorithms struggle to produce useful policies in this novel challenge, even when combined with recent network architectures designed for scalability and generalisability. The OGC pushes the boundaries of real-world human-AI cooperation by enabling the research community to study the impact of generalisation on cooperating agents.