🤖 AI Summary
In multi-agent reinforcement learning (MARL) under dynamic and uncertain environments, existing approaches suffer from low sample efficiency in task decomposition and difficulty exploring the joint action space under partial observability. Method: We propose a two-level hierarchical framework: (i) an upper level that employs a conditional diffusion model to predict subtask effects on environmental states and rewards, yielding interpretable subtask representations; and (ii) a lower level that utilizes a multi-head attention-based mixing network to enhance value decomposition, establishing semantic mappings between individual and joint values while decoupling subtask selection from skill execution. Contribution/Results: The framework automatically infers collaborative patterns and subtask structures, enabling efficient long-horizon coordination. Evaluated on multiple benchmark tasks, it significantly improves sample efficiency, collaboration performance, and adaptability to environmental dynamics—outperforming state-of-the-art methods.
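The high-level "conditional diffusion model predicts subtask effects" step can be pictured as a standard DDPM forward/reverse pair over next-observation targets. The sketch below is illustrative only: the dimensions, the linear beta schedule, and the assumption of a perfect noise prediction are ours, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and schedule (hypothetical, not from the paper)
obs_dim, T = 4, 10
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)          # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, eps):
    """Forward process: noise a clean next-observation target x0 to step t."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def ddpm_step(x_t, t, eps_hat):
    """One reverse (denoising) step, given a predicted noise eps_hat.

    In the framework described above, eps_hat would come from a network
    conditioned on the current observation and the chosen subtask; here we
    just pass it in directly.
    """
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:  # the final step (t == 0) is deterministic
        mean += np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

x0 = rng.standard_normal(obs_dim)   # clean "next observation" target
eps = rng.standard_normal(obs_dim)  # forward-process noise
x_t = q_sample(x0, T - 1, eps)      # fully noised sample
```

With a perfect noise prediction, the deterministic step at `t = 0` inverts the forward noising exactly, which is the sanity check one would use when wiring up such a model.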
📝 Abstract
Task decomposition has shown promise in complex cooperative multi-agent reinforcement learning (MARL), enabling efficient hierarchical learning for long-horizon tasks in dynamic and uncertain environments. However, learning dynamic task decomposition from scratch generally requires a large number of training samples, especially when exploring the large joint action space under partial observability. In this paper, we present the Conditional Diffusion Model for Dynamic Task Decomposition (C$\text{D}^\text{3}$T), a novel two-level hierarchical MARL framework designed to automatically infer subtask and coordination patterns. The high-level policy learns subtask representations to generate a subtask selection strategy based on subtask effects. To capture the effects of subtasks on the environment, C$\text{D}^\text{3}$T predicts the next observation and reward using a conditional diffusion model. At the low level, agents collaboratively learn and share specialized skills within their assigned subtasks. Moreover, the learned subtask representation is also used as additional semantic information in a multi-head attention mixing network to enhance value decomposition and provide an efficient reasoning bridge between individual and joint value functions. Experimental results on various benchmarks demonstrate that C$\text{D}^\text{3}$T achieves better performance than existing baselines.
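The multi-head attention mixing described above can be sketched as follows: subtask representations serve as attention keys, a global-state embedding serves as the query, and softmax attention weights combine per-agent utilities into a joint value. All sizes, weight names, and the particular query/key wiring below are our illustrative assumptions; the softmax weights being nonnegative is what keeps the joint value monotonic in each agent's utility (the usual QMIX-style constraint).

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, d, n_heads = 3, 8, 2  # hypothetical sizes

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mix(q_agents, subtask_repr, state_emb, Wq, Wk):
    """Mix per-agent utilities into Q_tot with multi-head attention.

    Keys come from the learned subtask representations; the query comes
    from the global state. Softmax weights are nonnegative and sum to 1
    per head, so Q_tot is a convex combination of agent utilities and
    therefore monotonic in each of them.
    """
    q_tot = 0.0
    for h in range(n_heads):
        query = state_emb @ Wq[h]               # (d,)
        keys = subtask_repr @ Wk[h]             # (n_agents, d)
        w = softmax(keys @ query / np.sqrt(d))  # (n_agents,) attention weights
        q_tot += w @ q_agents                   # weighted sum of utilities
    return q_tot / n_heads

q_agents = rng.standard_normal(n_agents)            # per-agent utilities
subtask_repr = rng.standard_normal((n_agents, d))   # learned subtask keys
state_emb = rng.standard_normal(d)                  # global-state query
Wq = rng.standard_normal((n_heads, d, d))
Wk = rng.standard_normal((n_heads, d, d))
q_tot = attention_mix(q_agents, subtask_repr, state_emb, Wq, Wk)
```

Because each head's weights form a probability distribution over agents, `q_tot` always lies between the smallest and largest individual utility, and raising any single agent's utility can only raise the joint value.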