Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the multi-agent zero-shot coordination (ZSC) problem by proposing the Cross-Environment Cooperation (CEC) paradigm: a single partner policy is trained with reinforcement learning across billions of procedurally generated heterogeneous cooperative environments, enabling agents to acquire generalizable coordination norms without fine-tuning and thus to coordinate rapidly with unseen partners on unseen tasks. The method builds on a JAX-based reinforcement learning framework, integrating procedural environment generation with a rigorous zero-shot transfer evaluation protocol. To the authors' knowledge, this is the first fully human-data-free approach toward generalist cooperative agents. It substantially outperforms competitive baselines in quantitative metrics, qualitative analysis, and real-world human–agent collaboration, empirically validating both the efficacy and the cross-task, cross-partner transferability of the learned coordination norms.

📝 Abstract
Zero-shot coordination (ZSC), the ability to adapt to a new partner in a cooperative task, is a critical component of human-compatible AI. While prior work has focused on training agents to cooperate on a single task, these specialized models do not generalize to new tasks, even if they are highly similar. Here, we study how reinforcement learning on a distribution of environments with a single partner enables learning general cooperative skills that support ZSC with many new partners on many new problems. We introduce two Jax-based, procedural generators that create billions of solvable coordination challenges. We develop a new paradigm called Cross-Environment Cooperation (CEC), and show that it outperforms competitive baselines quantitatively and qualitatively when collaborating with real people. Our findings suggest that learning to collaborate across many unique scenarios encourages agents to develop general norms, which prove effective for collaboration with different partners. Together, our results suggest a new route toward designing generalist cooperative agents capable of interacting with humans without requiring human data.
Problem

Research questions and friction points this paper is trying to address.

Enabling zero-shot coordination with new partners in cooperative tasks
Generalizing cooperative skills across diverse environments and problems
Developing AI agents that collaborate effectively with humans without human data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Environment Cooperation (CEC) paradigm
Jax-based procedural generators for billions of solvable coordination challenges
Reinforcement learning across diverse environments
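The core idea above can be sketched in a few lines: rather than training a policy pair on one fixed task, CEC evaluates (and trains) it across a distribution of procedurally generated cooperative environments, so that a shared convention, not task-specific memorization, is what earns reward. This is a minimal pure-Python sketch, not the paper's implementation: the generator, the toy payoff, and the convention-following policies are all hypothetical stand-ins for the JAX-based components described in the paper.

```python
import random

def make_env(seed):
    """Hypothetical stand-in for a procedural generator: each seed
    yields a distinct (but solvable) cooperative layout."""
    rng = random.Random(seed)
    return {"goal": rng.randint(0, 9), "size": rng.randint(3, 8)}

def episode_return(env, policy_a, policy_b):
    """Toy cooperative payoff: both agents are rewarded only when their
    actions agree with each other and with a property of the environment."""
    return 1.0 if policy_a(env) == policy_b(env) == env["goal"] % 2 else 0.0

# "Policies" here are plain functions; a coordination norm is a convention
# that both partners independently follow across every environment.
norm_policy = lambda env: env["goal"] % 2      # trained agent (stand-in)
unseen_partner = lambda env: env["goal"] % 2   # new partner sharing the norm

# Cross-Environment Cooperation, schematically: score one policy pair over
# many procedurally generated environments rather than a single task.
envs = [make_env(seed) for seed in range(1000)]
mean_return = sum(episode_return(e, norm_policy, unseen_partner)
                  for e in envs) / len(envs)
print(mean_return)  # partners sharing the norm coordinate zero-shot
```

A partner that follows a different convention (e.g. `lambda env: 1 - env["goal"] % 2`) would score zero under the same protocol, which is the contrast the paper's zero-shot evaluation is designed to surface.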
Kunal Jha
Department of Computer Science, University of Washington, Seattle, WA
Wilka Carvalho
Harvard University
cognitive science, reinforcement learning, deep learning
Yancheng Liang
Department of Computer Science, University of Washington, Seattle, WA
Simon S. Du
Department of Computer Science, University of Washington, Seattle, WA
Natasha Jaques
University of Washington, Google Research
social reinforcement learning, machine learning, deep learning, multi-agent, human-AI interaction