🤖 AI Summary
This work addresses multi-agent zero-shot coordination (ZSC) by proposing the Cross-Environment Cooperation (CEC) paradigm: a single policy is trained with a single partner via reinforcement learning across billions of procedurally generated cooperative environments, enabling it to acquire generalizable coordination norms without fine-tuning and to coordinate with unseen partners on unseen tasks. The method builds on a JAX-based reinforcement learning framework, integrated with procedural environment generation and a zero-shot transfer evaluation protocol, and requires no human data. CEC substantially outperforms existing baselines in quantitative metrics, qualitative analysis, and real human–agent collaboration studies, empirically supporting both the efficacy and the cross-task, cross-partner transferability of the learned coordination norms.
📝 Abstract
Zero-shot coordination (ZSC), the ability to adapt to a new partner in a cooperative task, is a critical component of human-compatible AI. While prior work has focused on training agents to cooperate on a single task, these specialized models do not generalize to new tasks, even highly similar ones. Here, we study how reinforcement learning on a distribution of environments with a single partner enables learning general cooperative skills that support ZSC with many new partners on many new problems. We introduce two JAX-based, procedural generators that create billions of solvable coordination challenges. We develop a new paradigm called Cross-Environment Cooperation (CEC), and show that it outperforms competitive baselines quantitatively and qualitatively when collaborating with real people. Our findings suggest that learning to collaborate across many unique scenarios encourages agents to develop general norms, which prove effective for collaboration with different partners. Together, our results suggest a new route toward designing generalist cooperative agents capable of interacting with humans without requiring human data.
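The core idea in both the summary and the abstract is structural: rather than training a policy to convergence on one fixed task, CEC streams a fresh, seed-generated environment into each training episode so that a single policy is shaped by the whole distribution. The following is a minimal, hypothetical Python sketch of that loop; the names `generate_env` and `train_cec`, the toy grid layouts, and the visit-count "policy" are illustrative stand-ins, not the paper's actual JAX generators or learning algorithm.

```python
import random

def generate_env(seed):
    """Hypothetical stand-in for a procedural generator: each seed
    deterministically yields a distinct toy cooperative layout."""
    rng = random.Random(seed)
    width, height = rng.randint(5, 9), rng.randint(5, 9)
    goal = (rng.randrange(width), rng.randrange(height))
    return {"size": (width, height), "goal": goal}

def train_cec(num_envs, episodes_per_env=1):
    """Sketch of the CEC-style loop: one shared policy (here, just a
    visit-count table keyed by layout size) is updated across a stream
    of generated environments instead of a single fixed task."""
    policy = {}
    for seed in range(num_envs):
        env = generate_env(seed)       # new environment every iteration
        for _ in range(episodes_per_env):
            key = env["size"]          # placeholder for a real RL update
            policy[key] = policy.get(key, 0) + 1
    return policy

policy = train_cec(1000)
```

The point of the structure is that no single environment dominates training: experience is spread across the generator's output distribution, which is what the paper argues drives the emergence of general coordination norms rather than task-specific conventions.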