🤖 AI Summary
This work addresses the challenge of learning multimodal, dynamically aligned collaborative behaviors from heterogeneous team demonstrations in partially observable, communication-constrained multi-agent and human–robot collaboration settings. We propose DTIL, a hierarchical multi-agent imitation learning framework that, to our knowledge, is the first to enable decoupled policy learning from heterogeneous demonstrations. DTIL combines factored, decoupled training with Wasserstein distribution matching to mitigate error accumulation, and it scales to long horizons and high-dimensional state spaces. Evaluated across diverse collaborative tasks, DTIL significantly outperforms existing multi-agent imitation learning (MAIL) methods. It accurately reproduces the expert team’s diverse execution patterns, demonstrating improved policy robustness and generalization under partial observability and limited communication.
📝 Abstract
Successful collaboration requires team members to stay aligned, especially in complex sequential tasks. Team members must dynamically coordinate which subtasks to perform and in what order. However, real-world constraints like partial observability and limited communication bandwidth often lead to suboptimal collaboration. Even among expert teams, the same task can be executed in multiple ways. To develop multi-agent systems and human-AI teams for such tasks, we are interested in data-driven learning of multimodal team behaviors. Multi-Agent Imitation Learning (MAIL) provides a promising framework for data-driven learning of team behavior from demonstrations, but existing methods struggle with heterogeneous demonstrations, as they assume that all demonstrations originate from a single team policy. Hence, in this work, we introduce DTIL: a hierarchical MAIL algorithm designed to learn multimodal team behaviors in complex sequential tasks. DTIL represents each team member with a hierarchical policy and learns these policies from heterogeneous team demonstrations in a factored manner. By employing a distribution-matching approach, DTIL mitigates compounding errors and scales effectively to long horizons and continuous state representations. Experimental results show that DTIL outperforms MAIL baselines and accurately models team behavior across a variety of collaborative scenarios.
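The hierarchical, factored policy structure described in the abstract (a high-level subtask selector plus a low-level action policy per agent, each acting on its own local observation) can be sketched as follows. This is a minimal illustrative sketch: the class name, dimensions, and linear scoring functions are placeholders, not DTIL's actual architecture or learning procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

class HierarchicalAgentPolicy:
    """Illustrative two-level policy for a single agent (not DTIL's
    actual model): a high-level selector picks a latent subtask from
    the agent's local observation, and a low-level policy maps the
    (observation, subtask) pair to an action. Random linear scorers
    stand in for the learned networks."""

    def __init__(self, obs_dim, n_subtasks, n_actions):
        self.W_high = rng.normal(size=(n_subtasks, obs_dim))
        self.W_low = rng.normal(size=(n_actions, obs_dim + n_subtasks))
        self.n_subtasks = n_subtasks

    def act(self, obs):
        # High level: choose the highest-scoring latent subtask.
        subtask = int(np.argmax(self.W_high @ obs))
        # Low level: condition the action on observation + subtask.
        one_hot = np.eye(self.n_subtasks)[subtask]
        action = int(np.argmax(self.W_low @ np.concatenate([obs, one_hot])))
        return subtask, action

# Factored team: each agent acts independently from its own
# local observation, so policies can be trained in a decoupled way.
team = [HierarchicalAgentPolicy(obs_dim=4, n_subtasks=3, n_actions=2)
        for _ in range(2)]
joint_action = [policy.act(rng.normal(size=4)) for policy in team]
```

In this factored view, each agent's hierarchical policy can be fit to its own portion of the heterogeneous demonstrations, which is the decoupling the abstract refers to.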