AI Summary
This work addresses the challenge of improving AI agents' generalization to unseen human collaborators in cooperative tasks, where conventional adversarial training often collapses into self-destructive equilibria and adapts poorly to heterogeneous human behavior. To this end, we propose GOAT (Generative Online Adversarial Training): a framework that freezes the backbone parameters of a generative model (e.g., a diffusion model or VAE) and optimizes only task-specific policy embeddings. GOAT integrates generative modeling with online adversarial training, learning end-to-end to maximize regret over cooperative styles. The method supports human-in-the-loop evaluation and significantly improves zero-shot adaptation to novel human partners. Evaluated on the Overcooked benchmark, GOAT achieves state-of-the-art performance. Crucially, user studies with real humans confirm its robustness and strong generalization across diverse human behavioral patterns, establishing a new paradigm for trustworthy human-AI collaboration.
Abstract
Being able to cooperate with new people is an important component of many economically valuable AI tasks, from household robotics to autonomous driving. However, generalizing to novel humans requires training on data that captures the diversity of human behaviors. Adversarial training is one avenue for searching for such data and ensuring that agents are robust. However, it is difficult to apply in the cooperative setting because adversarial policies intentionally learn to sabotage the task instead of simulating valid cooperation partners. To address this challenge, we propose a novel strategy for overcoming self-sabotage that combines a pre-trained generative model, which simulates valid cooperative agent policies, with adversarial training that maximizes regret. We call our method GOAT: Generative Online Adversarial Training. In this framework, GOAT dynamically searches for and generates coordination strategies on which the learning policy, the Cooperator agent, underperforms. GOAT enables better generalization by exposing the Cooperator to a variety of challenging interaction scenarios. We maintain realistic coordination strategies by updating only the generative model's embedding while keeping its parameters frozen, thus avoiding adversarial exploitation. We evaluate GOAT with real human partners, and the results demonstrate state-of-the-art performance on the Overcooked benchmark, highlighting its effectiveness in generalizing to diverse human behaviors.
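The core loop described above can be sketched in a few lines. The toy code below is a minimal illustration, not the paper's implementation: a fixed linear map stands in for the frozen generative model, a quadratic stand-in replaces Overcooked rollouts, and `decode_partner`, `cooperator_return`, the finite-difference gradient, and the unit-ball projection (a proxy for staying on the generative model's learned manifold) are all assumptions made for the sketch. The key structural point it demonstrates is that only the latent embedding `z` is updated adversarially to maximize regret, while the decoder's weights are never touched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "generative model": a fixed linear decoder mapping a latent
# embedding z to partner-policy parameters (stand-in for a trained VAE).
# W is never updated -- only z is optimized.
W = rng.normal(size=(4, 2))

def decode_partner(z):
    return W @ z  # toy partner-policy parameters

def cooperator_return(theta_coop, theta_partner):
    # Toy cooperative return: high when the Cooperator matches the
    # partner's style (placeholder for actual Overcooked rollouts).
    return -np.sum((theta_coop - theta_partner) ** 2)

def best_response_return(theta_partner):
    # Return of an ideal partner-aware policy; in this toy objective a
    # perfect best response achieves 0, so regret is the squared gap.
    return 0.0

def regret(theta_coop, z):
    tp = decode_partner(z)
    return best_response_return(tp) - cooperator_return(theta_coop, tp)

theta_coop = np.zeros(4)
z = rng.normal(size=2)

for step in range(200):
    # Adversarial step: finite-difference gradient ASCENT on regret
    # with respect to z only (the decoder weights stay frozen).
    eps = 1e-4
    grad_z = np.zeros_like(z)
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        grad_z[i] = (regret(theta_coop, z + dz)
                     - regret(theta_coop, z - dz)) / (2 * eps)
    z = z + 0.05 * grad_z

    # Projection keeps z in a plausible region of latent space, a crude
    # proxy for the generative model constraining partners to be valid.
    z = z / max(1.0, np.linalg.norm(z))

    # Cooperator step: gradient DESCENT on its own loss against the
    # currently generated partner.
    tp = decode_partner(z)
    theta_coop = theta_coop - 0.1 * 2.0 * (theta_coop - tp)
```

The alternation mirrors the paper's setup: the embedding chases coordination styles where the Cooperator underperforms, and the Cooperator is trained against each newly generated partner, while the frozen decoder prevents the adversary from drifting into self-sabotaging, invalid behaviors.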