🤖 AI Summary
In autonomous cyber operations (ACO), reinforcement learning (RL) agents trained from scratch suffer from slow convergence and poor initial policy performance. To address this, the paper introduces teacher-guided learning to ACO for the first time and systematically evaluates four guidance paradigms—behavioral cloning, offline RL, curriculum learning, and reward shaping—within the CybORG simulation environment. Experimental results demonstrate that teacher guidance significantly improves early decision quality (by 42% on average), accelerates convergence (reducing training steps by ~35%), and enhances policy robustness. The study empirically validates the efficacy of knowledge transfer in dynamic cybersecurity decision-making and establishes a reproducible methodological framework for efficient, trustworthy training of ACO agents.
📝 Abstract
Autonomous Cyber Operations (ACO) rely on Reinforcement Learning (RL) to train agents to make effective decisions in the cybersecurity domain. However, existing ACO applications require agents to learn from scratch, leading to slow convergence and poor early-stage performance. While teacher-guided techniques have demonstrated promise in other domains, they have not yet been applied to ACO. In this study, we implement four distinct teacher-guided techniques in the simulated CybORG environment and conduct a comparative evaluation. Our results demonstrate that teacher integration can significantly improve training efficiency in terms of early policy performance and convergence speed, highlighting its potential benefits for autonomous cybersecurity.
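To make the teacher-guidance idea concrete, here is a minimal, purely illustrative sketch of one of the four paradigms, behavioral cloning, used as a warm start before RL. None of these names, the toy environment, or the noise rate come from the paper; the actual CybORG setup is far more complex.

```python
# Illustrative sketch of teacher-guided warm-starting via behavioral cloning.
# All names and the toy 4-state / 3-action environment are hypothetical,
# not the paper's CybORG configuration.
from collections import Counter, defaultdict
import random

random.seed(0)

# Hypothetical "teacher" policy: the best action for each state.
TEACHER_POLICY = {0: 2, 1: 0, 2: 1, 3: 2}

def teacher_demonstrations(n=200):
    """Sample (state, action) pairs from the teacher, with 10% action noise."""
    demos = []
    for _ in range(n):
        s = random.randrange(4)
        a = TEACHER_POLICY[s] if random.random() > 0.1 else random.randrange(3)
        demos.append((s, a))
    return demos

def behavioral_cloning(demos):
    """Tabular BC: for each state, adopt the teacher's most frequent action."""
    counts = defaultdict(Counter)
    for s, a in demos:
        counts[s][a] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# The student starts RL from this cloned policy instead of a random one,
# which is what drives the early-performance gains the paper reports.
student = behavioral_cloning(teacher_demonstrations())
print(student)
```

The point of the sketch is only the training-pipeline shape: supervised imitation of teacher demonstrations yields a sensible initial policy, and RL fine-tuning then proceeds from there rather than from scratch.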