🤖 AI Summary
To address the challenges of network defense, including large policy spaces, partial observability, and stealthy, deceptive adversarial strategies, this paper proposes a hierarchical multi-agent reinforcement learning (MARL) framework. Methodologically, it introduces a two-layer master-subordinate Proximal Policy Optimization (PPO) architecture: a master policy coordinates high-level decisions, while domain-informed sub-policies handle transferable defensive sub-tasks (e.g., network investigation, host recovery), enabling low-cost fine-tuning and cross-scenario adaptation. Evaluated in the CybORG Cage 4 simulation environment, the approach converges significantly faster than baseline methods, increases the clean-host rate by 18.7%, improves recovery-action precision by 23.5%, and reduces the false-recovery rate by 31.2%. The core contribution is the first explainable, transferable hierarchical MARL paradigm tailored to realistic network defense, balancing strategy decomposability, training efficiency, and robustness against adaptive adversaries.
📝 Abstract
Recent advances in multi-agent reinforcement learning (MARL) have created opportunities to solve complex real-world tasks. Cybersecurity is a notable application area, where defending networks against sophisticated adversaries remains a challenging task typically performed by teams of security operators. In this work, we explore novel MARL strategies for building autonomous cyber network defenses that address challenges such as large policy spaces, partial observability, and stealthy, deceptive adversarial strategies. To facilitate efficient and generalized learning, we propose a hierarchical Proximal Policy Optimization (PPO) architecture that decomposes the cyber defense task into specific sub-tasks like network investigation and host recovery. Our approach involves training sub-policies for each sub-task using PPO enhanced with domain expertise. These sub-policies are then leveraged by a master defense policy that coordinates their selection to solve complex network defense tasks. Furthermore, the sub-policies can be fine-tuned and transferred with minimal cost to defend against shifts in adversarial behavior or changes in network settings. We conduct extensive experiments using CybORG Cage 4, the state-of-the-art MARL environment for cyber defense. Comparisons with multiple baselines across different adversaries show that our hierarchical learning approach achieves top performance in terms of convergence speed, episodic return, and several interpretable metrics relevant to cybersecurity, including the fraction of clean machines on the network, precision, and false positives on recoveries.
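The hierarchical decomposition described above can be sketched in miniature: a master policy routes each observation to one of several pre-trained sub-policies, each specialized for a defensive sub-task. All class names, action names, and the toy routing rule below are illustrative assumptions, not the paper's implementation; the actual policies are PPO-trained networks evaluated in CybORG Cage 4, and the training loop is omitted entirely.

```python
import random

class SubPolicy:
    """A sub-policy maps an observation to a defensive action for its sub-task.

    Stand-in for a PPO-trained network; here it samples uniformly from the
    sub-task's action set. (Hypothetical sketch, not the paper's code.)
    """
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions

    def act(self, observation):
        return random.choice(self.actions)

class MasterPolicy:
    """The master policy selects which sub-policy acts at each step.

    The routing rule here is a hand-written placeholder for the learned
    high-level policy: suspicious hosts trigger recovery, otherwise the
    agent keeps investigating the network.
    """
    def __init__(self, sub_policies):
        self.sub_policies = sub_policies

    def select(self, observation):
        if observation.get("suspicious_host"):
            return self.sub_policies["recovery"]
        return self.sub_policies["investigation"]

    def act(self, observation):
        return self.select(observation).act(observation)

# Assumed sub-task action sets, loosely modeled on CybORG-style defender actions.
policies = {
    "investigation": SubPolicy("investigation", ["scan_subnet", "analyse_host"]),
    "recovery": SubPolicy("recovery", ["restore_host", "remove_malware"]),
}
master = MasterPolicy(policies)

print(master.act({"suspicious_host": True}))   # one of the recovery actions
print(master.act({"suspicious_host": False}))  # one of the investigation actions
```

Because the sub-policies are separate objects, fine-tuning or swapping one (say, retraining "recovery" against a new adversary) leaves the master policy and the other sub-policies untouched, which is the transfer property the abstract highlights.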