AI Summary
Reinforcement learning (RL)-driven network warfare simulations suffer from poor interpretability of defensive agent decisions. Method: Focusing on open-source CAGE Challenge 2 RL defense agents, we propose an event-driven, fine-grained explainability framework: it simplifies the state/action spaces, traces critical offensive/defensive events (e.g., penetration, cleanup), and models state-transition effectiveness to systematically uncover the agents' online decision-making logic. Contribution/Results: Experiments reveal action failure rates of 40%–99%, yet most intrusions are mitigated within 1–2 timesteps; decoy services block up to 94% of exploits that would directly grant privileged access to a host. Our analysis quantifies the effectiveness boundaries of RL-based defense behaviors, empirically validates the substantial robustness improvement conferred by decoys, and provides evidence-based guidance for enhancing simulation fidelity in CAGE Challenge 4.
Abstract
We analyze two open-source deep reinforcement learning agents submitted to the CAGE Challenge 2 cyber defense competition, in which each competitor submitted an agent to defend a simulated network against several provided rules-based attack agents. We demonstrate that one can gain interpretability of agent successes and failures by simplifying the complex state and action spaces and by tracking important events, shedding light on the fine-grained behavior of both the defense and attack agents in each experimental scenario. By analyzing important events within an evaluation episode, we identify patterns in infiltration and clearing events that show how well the attacker and defender played their respective roles; for example, defenders were generally able to clear infiltrations within one or two timesteps of a host being exploited. By examining the state transitions caused by each possible action, we determine which actions tended to be effective and which did not, showing that certain important actions are between 40% and 99% ineffective. We examine how decoy services affect exploit success, finding for instance that decoys block up to 94% of exploits that would otherwise directly grant privileged access to a host. Finally, we discuss the realism of the challenge and ways in which CAGE Challenge 4 has addressed some of our concerns.
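The per-action effectiveness analysis described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the event-log format, the toy data, and the use of the action names `Remove` and `Restore` (drawn from the CAGE Challenge 2 defender action set) are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical simplified event log: one record per defender action,
# (timestep, action_name, effective). `effective` is True when the action
# produced the intended state transition in the environment.
events = [
    (1, "Remove", False),
    (2, "Remove", True),
    (3, "Restore", True),
    (4, "Remove", False),
    (5, "Restore", False),
]

def action_failure_rates(log):
    """Fraction of attempts, per action, that failed to change the state."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for _, action, effective in log:
        totals[action] += 1
        if not effective:
            failures[action] += 1
    return {action: failures[action] / totals[action] for action in totals}

rates = action_failure_rates(events)
# e.g., rates["Remove"] is 2/3: two of three Remove attempts had no effect.
```

Aggregating such rates over full evaluation episodes is what yields the paper's finding that certain important actions fail 40%–99% of the time.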