🤖 AI Summary
This work addresses key challenges in knowledge distillation for multi-agent reinforcement learning: the difficulty of synthesizing high-performing teacher policies, poor out-of-distribution generalization, and mismatched observation spaces between teachers and students. To overcome these limitations, the authors propose the HINT framework, which leverages hierarchical reinforcement learning within a centralized-training, decentralized-execution paradigm to construct scalable, high-performance teachers. HINT introduces a pseudo-off-policy update mechanism that lets teachers refine their policies using student-collected experience, and a performance-based guidance-filtering strategy that preserves critical distillation signals. Together these components mitigate out-of-distribution generalization and observation-mismatch issues, and HINT achieves 1.6–2.65× higher success rates than baseline methods on complex cooperative tasks such as FireCommander and MARINE, significantly improving multi-agent coordination.
📝 Abstract
Knowledge distillation (KD) has the potential to accelerate MARL by employing a centralized teacher for decentralized students, but it faces key bottlenecks. Specifically, there are (1) challenges in synthesizing high-performing teacher policies in complex domains, (2) difficulties when teachers must reason about out-of-distribution (OOD) states, and (3) mismatches between the decentralized students' and the centralized teacher's observation spaces. To address these limitations, we propose HINT (Hierarchical INteractive Teacher-based transfer), a novel KD framework for MARL in a centralized-training, decentralized-execution setup. By leveraging hierarchical RL, HINT provides a scalable, high-performing teacher. Our key innovation, pseudo off-policy RL, enables the teacher policy to be updated using both teacher and student experience, thereby improving OOD adaptation. HINT also applies performance-based filtering to retain only outcome-relevant guidance, reducing observation mismatches. We evaluate HINT on challenging cooperative domains (e.g., FireCommander for resource allocation, MARINE for tactical combat). Across these benchmarks, HINT outperforms baselines, achieving improvements of 60% to 165% in success rate.
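The paper does not include implementation details here, but the two core ideas (pseudo off-policy updates from student experience, plus performance-based guidance filtering) can be illustrated with a minimal sketch. All names below (`PseudoOffPolicyBuffer`, `return_threshold`, etc.) are hypothetical, not from the paper:

```python
import random
from collections import deque

class PseudoOffPolicyBuffer:
    """Illustrative replay buffer for pseudo off-policy teacher updates.

    Teacher rollouts are always stored; student rollouts are admitted only
    when their episode return clears a performance threshold, approximating
    HINT's performance-based guidance filtering (names are assumptions).
    """

    def __init__(self, capacity=10_000, return_threshold=0.0):
        self.buffer = deque(maxlen=capacity)
        self.return_threshold = return_threshold

    def add_teacher_trajectory(self, transitions):
        # The teacher's own experience is always kept.
        self.buffer.extend(transitions)

    def add_student_trajectory(self, transitions, episode_return):
        # Performance-based filtering: only sufficiently successful student
        # episodes feed back into teacher updates, exposing the teacher to
        # the (possibly OOD) states that capable students actually visit.
        if episode_return >= self.return_threshold:
            self.buffer.extend(transitions)

    def sample(self, batch_size):
        # Minibatch for a teacher policy/value update.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```

The key design point this sketch highlights is that the teacher's update data is a filtered mixture of its own and the students' trajectories, rather than teacher rollouts alone.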