🤖 AI Summary
Poor interpretability of multi-agent reinforcement learning (MARL) policies is a key safety and trust bottleneck for real-world deployment. To address this, we propose HYDRAVIPER, a decision-tree-based interpretable MARL algorithm featuring (i) a cooperative training mechanism that coordinates agents based on expected team performance and (ii) an adaptive strategy for allocating the environment-interaction budget, which together yield a Pareto-optimal trade-off between performance and computational efficiency. HYDRAVIPER combines decision-tree policy modeling, multi-agent cooperative optimization, budget-aware reinforcement learning, and interpretable policy distillation to construct performant, inherently interpretable surrogate policies. Evaluated on cooperative benchmark tasks and traffic signal control, HYDRAVIPER matches state-of-the-art (SOTA) performance in a fraction of the runtime, and it maintains a stable Pareto frontier across varying interaction budgets, demonstrating robustness and practical deployability.
📄 Abstract
Poor interpretability hinders the practical applicability of multi-agent reinforcement learning (MARL) policies. Deploying interpretable surrogates of uninterpretable policies enhances the safety and verifiability of MARL for real-world applications. However, if these surrogates are to interact directly with the environment within human supervisory frameworks, they must be both performant and computationally efficient. Prior work on interpretable MARL has either sacrificed performance for computational efficiency or computational efficiency for performance. To address this issue, we propose HYDRAVIPER, a decision tree-based interpretable MARL algorithm. HYDRAVIPER coordinates training between agents based on expected team performance, and adaptively allocates budgets for environment interaction to improve computational efficiency. Experiments on standard benchmark environments for multi-agent coordination and traffic signal control show that HYDRAVIPER matches the performance of state-of-the-art methods using a fraction of the runtime, and that it maintains a Pareto frontier of performance for different interaction budgets.
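To make the distillation idea concrete, here is a minimal, hypothetical sketch of the general recipe behind decision-tree surrogate policies: roll out a black-box expert policy, collect (observation, action) pairs, and fit a small tree to imitate it. This is an illustrative single-agent example, not HYDRAVIPER itself; the `expert_policy` function and all parameters are stand-ins invented for this sketch.

```python
# Illustrative sketch only: distilling a black-box policy into a decision tree.
# The "expert" below is a hypothetical stand-in, not the paper's MARL policy.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def expert_policy(obs):
    # Hypothetical expert: pick action 1 when the first feature
    # exceeds the second, action 0 otherwise.
    return (obs[:, 0] > obs[:, 1]).astype(int)

# 1) Query the expert to collect (observation, action) training pairs.
observations = rng.normal(size=(2000, 4))
actions = expert_policy(observations)

# 2) Distill the behavior into a shallow, inherently interpretable tree.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(observations, actions)

# 3) Measure fidelity: how often the surrogate agrees with the expert
#    on held-out states.
held_out = rng.normal(size=(500, 4))
fidelity = float((tree.predict(held_out) == expert_policy(held_out)).mean())
print(f"surrogate fidelity: {fidelity:.2f}")
```

The resulting tree can be inspected branch by branch, which is what makes such surrogates amenable to human verification; the challenge the abstract points to is doing this for multiple coordinating agents without wasting environment interactions.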