🤖 AI Summary
Autonomous defense systems struggle in dynamic enterprise networks, where devices frequently join and leave: their policy models assume a fixed topology, which undermines their ability to generalize.
Method: We propose an entity-based reinforcement learning framework that decomposes the observation and action spaces into a variable number of discrete network entities, eliminating rigid dependence on a predefined topology. The approach employs a Transformer architecture to model inter-entity relationships, uses Yawning Titan for simulation, and adopts a multi-topology training paradigm to enable compositional generalization and zero-shot transfer.
Results: Experiments demonstrate zero-shot generalization across unseen network scales (i.e., node counts not encountered during training), and the approach significantly outperforms MLP baselines under multi-topology evaluation. Performance remains competitive with single-topology training, indicating strong adaptability to real-world dynamic networks and practical deployability.
📝 Abstract
A significant challenge for autonomous cyber defence is ensuring a defensive agent's ability to generalise across diverse network topologies and configurations. This capability is necessary for agents to remain effective when deployed in dynamically changing environments, such as an enterprise network where devices may frequently join and leave. Standard approaches to deep reinforcement learning, where policies are parameterised using a fixed-input multi-layer perceptron (MLP), expect fixed-size observation and action spaces. In autonomous cyber defence, this makes it hard to develop agents that generalise to environments with network topologies different from those trained on, as the number of nodes affects the natural size of the observation and action spaces. To overcome this limitation, we reframe the problem of autonomous network defence using entity-based reinforcement learning, where the observation and action space of an agent are decomposed into a collection of discrete entities. This framework enables the use of policy parameterisations specialised in compositional generalisation. We train a Transformer-based policy on the Yawning Titan cyber-security simulation environment and test its generalisation capabilities across various network topologies. We demonstrate that this approach significantly outperforms an MLP-based policy when training across fixed-size networks of varying topologies, and matches performance when training on a single network. We also demonstrate the potential for zero-shot generalisation to networks of a different size to those seen in training. These findings highlight the potential for entity-based reinforcement learning to advance the field of autonomous cyber defence by providing more generalisable policies capable of handling variations in real-world network environments.
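To make the core idea concrete, the sketch below shows why an entity-based, Transformer-style policy is size-agnostic where an MLP is not: the attention and scoring weights are applied per entity, so one set of parameters produces one action logit per node regardless of how many nodes the network has. This is a minimal NumPy illustration, not the paper's implementation; the feature dimension, single attention layer, and the "one logit per node" action head are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hypothetical per-entity feature size (illustrative, not from the paper)

# Shared weights: applied identically to every entity, so the policy
# accepts any number of nodes N without retraining or resizing.
Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
w_act = rng.standard_normal(D)  # per-entity action score head

def entity_policy_logits(entities):
    """One self-attention layer over N entity vectors -> N action logits."""
    q, k, v = entities @ Wq, entities @ Wk, entities @ Wv
    scores = q @ k.T / np.sqrt(D)               # (N, N) entity interactions
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)    # row-wise softmax
    mixed = attn @ v                            # (N, D) context-aware features
    return mixed @ w_act                        # one logit per entity/node

# The same weights handle networks of different sizes - the basis
# for zero-shot transfer across node counts.
small = entity_policy_logits(rng.standard_normal((5, D)))
large = entity_policy_logits(rng.standard_normal((12, D)))
print(small.shape, large.shape)  # (5,) (12,)
```

By contrast, an MLP policy would flatten the N node features into one fixed-length vector, fixing N at training time; here the parameter count is independent of N, which is what permits evaluation on topologies and scales unseen during training.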