🤖 AI Summary
Learning coordinated behaviors from multimodal expert demonstrations in multi-robot systems remains challenging; existing diffusion-based approaches rely on centralized planning or explicit inter-agent communication. Method: This paper proposes a decentralized diffusion policy framework grounded in the Centralized Training with Decentralized Execution (CTDE) paradigm. During training, it jointly optimizes multimodal policy distributions using global state information; during execution, each agent acts solely on its local observations, achieving communication-free, diverse coordination through implicit cooperation. Contribution/Results: The core innovation is integrating diffusion models into multi-agent imitation learning to model diverse behavioral modes end to end. Evaluated in both simulation and real-robot experiments, the method reproduces multiple cooperative patterns and consistently outperforms state-of-the-art baselines in performance, adaptability, and robustness.
📝 Abstract
As robots become more integrated into society, their ability to coordinate with other robots and humans on multi-modal tasks (those with multiple valid solutions) is crucial. We propose to learn such behaviors from expert demonstrations via imitation learning (IL). However, when expert demonstrations are multi-modal, standard IL approaches can struggle to capture the diverse strategies, hindering effective coordination. Diffusion models are effective at handling complex multi-modal trajectory distributions in single-agent systems, and they have also excelled in multi-agent scenarios, where multi-modality is more common and crucial to learning coordinated behaviors. However, diffusion-based approaches typically require a centralized planner or explicit communication among agents, assumptions that can fail in real-world settings where robots must operate independently or alongside agents, such as humans, with whom they cannot directly communicate. We therefore propose MIMIC-D, a Centralized Training, Decentralized Execution (CTDE) framework for multi-modal multi-agent imitation learning with diffusion policies. Agents are trained jointly with full information but execute their policies using only local information, achieving implicit coordination. In both simulation and hardware experiments, our method recovers multi-modal coordinated behavior across a variety of tasks and environments while improving upon state-of-the-art baselines.
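To make the CTDE recipe concrete, here is a hypothetical toy sketch, not the paper's implementation: every name, dimension, and the linear "denoiser" are illustrative assumptions. Each agent owns a small noise-prediction model conditioned on its local observation; training happens centrally with access to the full joint demonstration (all agents' observations and actions at once), while execution is fully decentralized, with each agent iteratively denoising an action from pure noise using only its own observation.

```python
# Hypothetical CTDE diffusion-policy sketch (illustrative only, not MIMIC-D's
# actual architecture): linear per-agent denoisers stand in for the real
# diffusion networks.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, ACT_DIM, STEPS = 2, 3, 2, 10

def make_denoiser():
    # Linear "denoiser": predicts the injected noise from (noisy action, cond).
    return {"W": rng.normal(scale=0.1, size=(ACT_DIM + OBS_DIM, ACT_DIM))}

def predict_noise(net, noisy_action, cond):
    return np.concatenate([noisy_action, cond]) @ net["W"]

agents = [make_denoiser() for _ in range(N_AGENTS)]

# --- Centralized training: the trainer sees the whole joint demonstration ---
def train_step(agents, joint_obs, joint_actions, lr=1e-2):
    for i, net in enumerate(agents):
        noise = rng.normal(size=ACT_DIM)
        noisy = joint_actions[i] + noise          # corrupt the demo action
        pred = predict_noise(net, noisy, joint_obs[i])
        # SGD on 0.5 * ||pred - noise||^2; gradient w.r.t. W is outer(x, err).
        x = np.concatenate([noisy, joint_obs[i]])
        net["W"] -= lr * np.outer(x, pred - noise)

# --- Decentralized execution: agent i conditions only on its local obs ---
def act(net, local_obs):
    a = rng.normal(size=ACT_DIM)                  # start from pure noise
    for _ in range(STEPS):                        # naive iterative denoising
        a = a - 0.5 * predict_noise(net, a, local_obs)
    return a
```

Note the asymmetry that defines CTDE here: `train_step` takes the joint data for all agents, but `act` receives only one agent's observation, so no communication channel is needed at execution time.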