🤖 AI Summary
Multi-agent reinforcement learning (MARL) faces significant challenges: existing online methods are limited to single tasks and generalize weakly across tasks, while offline methods rely on high-quality datasets and generalize poorly to unseen tasks. To address these, we propose HyGen, a hybrid online-offline framework under the centralized training with decentralized execution (CTDE) paradigm. HyGen integrates online and offline training via a unified mixed replay buffer and decouples skill extraction from policy execution. It introduces the first online-offline co-training mechanism and a skill-selection-based multi-task generalization paradigm. Through multi-task offline pretraining, skill abstraction, and distillation-based fine-tuning, HyGen efficiently extracts and reuses generalizable skills. On the StarCraft benchmark, HyGen substantially outperforms purely online and offline baselines, achieves marked improvements in zero-shot task generalization, improves training efficiency by 37%, and attains an 89% skill reuse rate.
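The "unified mixed replay buffer" mentioned above can be pictured as a buffer that holds a fixed offline dataset alongside a rolling window of online interactions and draws minibatches from both. The sketch below is a hypothetical illustration, not the paper's implementation; the class name, `online_ratio` parameter, and transition format are all assumptions.

```python
import random
from collections import deque

class MixedReplayBuffer:
    """Hypothetical sketch of a unified online-offline buffer:
    offline trajectories are loaded once, online transitions are
    appended as training proceeds, and each minibatch mixes the
    two sources according to `online_ratio` (an assumed knob)."""

    def __init__(self, offline_data, capacity=10_000, online_ratio=0.5):
        self.offline = list(offline_data)      # fixed offline dataset
        self.online = deque(maxlen=capacity)   # rolling online buffer
        self.online_ratio = online_ratio

    def add(self, transition):
        # Store one online interaction (e.g., (obs, action, reward, next_obs)).
        self.online.append(transition)

    def sample(self, batch_size):
        # Take as many online samples as the ratio allows, fill the
        # rest of the batch from the offline dataset.
        n_online = min(int(batch_size * self.online_ratio), len(self.online))
        n_offline = batch_size - n_online
        batch = random.sample(list(self.online), n_online) if n_online else []
        batch += random.choices(self.offline, k=n_offline)
        return batch
```

In this sketch the offline portion is sampled with replacement (`random.choices`) so the buffer works even when the offline dataset is small relative to the batch size.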
📝 Abstract
In multi-agent reinforcement learning (MARL), achieving multi-task generalization across diverse agents and objectives presents significant challenges. Existing online MARL algorithms primarily focus on single-task performance; their lack of multi-task generalization typically results in substantial computational waste and limited real-world applicability. Meanwhile, existing offline multi-task MARL approaches depend heavily on data quality and often perform poorly on unseen tasks. In this paper, we introduce HyGen (Hybrid Training for Enhanced Multi-Task Generalization), a novel hybrid MARL framework that integrates online and offline learning to ensure both multi-task generalization and training efficiency. Specifically, our framework extracts potentially general skills from offline multi-task datasets. We then train policies to select the optimal skill under the centralized training with decentralized execution (CTDE) paradigm. During this stage, we use a replay buffer that integrates both offline data and online interactions. We empirically demonstrate that our framework effectively extracts and refines general skills, yielding impressive generalization to unseen tasks. Comparative analyses on the StarCraft multi-agent challenge show that HyGen outperforms a wide range of existing solely online and offline methods.
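The decentralized-execution side of skill selection can be sketched as follows: each agent scores the discrete skills from its local observation and lets the chosen (frozen) skill policy emit the primitive action. This is a minimal illustration under stated assumptions, not HyGen's actual architecture; the placeholder `q_values` stands in for a learned utility network, and the skill policies here are simple callables.

```python
import random

class SkillSelectorAgent:
    """Hypothetical sketch of decentralized execution in a
    skill-based MARL agent: pick a discrete skill from local
    observations (epsilon-greedily over skill utilities), then
    delegate to that skill's policy for the primitive action."""

    def __init__(self, skills, epsilon=0.1):
        self.skills = skills      # list of obs -> action callables
        self.epsilon = epsilon    # exploration rate over skills

    def q_values(self, obs):
        # Placeholder for a learned per-skill utility Q(obs, skill);
        # a real agent would run a neural network here.
        return [0.0 for _ in self.skills]

    def act(self, obs):
        if random.random() < self.epsilon:
            k = random.randrange(len(self.skills))   # explore
        else:
            q = self.q_values(obs)
            k = max(range(len(self.skills)), key=q.__getitem__)  # exploit
        return self.skills[k](obs)
```

During centralized training, the per-agent skill utilities would be combined by a mixing network (as in standard CTDE value factorization); at execution time each agent only needs its own observation, as shown above.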