Beyond Rewards in Reinforcement Learning for Cyber Defence

📅 2026-02-04
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the challenge that dense reward formulations in reinforcement learning–based cyber defence often lead to suboptimal, high-risk policies misaligned with true defensive objectives. The study systematically evaluates the impact of sparse versus dense rewards on policy behaviour, training stability, and defensive efficacy, proposing a goal-aligned sparse reward design. It introduces a novel ground-truth evaluation framework that uncovers the intrinsic relationships among reward structure, action space, and policy risk. Extensive experiments across multiple network scales and two well-established cyber gym environments, using both policy gradient and value-based algorithms, demonstrate that when sparse rewards are properly aligned with defensive goals and encountered sufficiently often, they significantly improve training reliability and yield safer, more efficient policies that make sparing use of costly defensive actions.
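
To make the contrast concrete, here is a minimal sketch of the two reward styles the summary describes, assuming a hypothetical cyber gym whose state exposes compromised and scanned host sets. The field names and penalty weights are invented for illustration, not taken from the paper or from any specific cyber gym.

```python
# Illustrative sketch only: the state/action fields and penalty weights below
# are assumptions for this example, not the paper's or any cyber gym's API.

def dense_reward(state, action):
    """Dense, engineered reward: many hand-tuned penalties and incentives."""
    reward = 0.0
    reward -= 1.0 * len(state.compromised_hosts)  # penalty per compromised host
    reward -= 0.1 * len(state.scanned_hosts)      # smaller penalty for attacker recon
    if action.is_costly:                          # e.g. restoring a host from backup
        reward -= 0.5
    if state.critical_server_safe:                # incentive for a desirable state
        reward += 0.2
    return reward

def sparse_reward(state, episode_done):
    """Sparse, goal-aligned reward: a signal only when the defensive goal is decided."""
    if not episode_done:
        return 0.0
    return 1.0 if state.critical_server_safe else -1.0
```

Under the dense scheme the agent optimises a weighted sum of proxy signals, which is exactly where misalignment can creep in; under the sparse scheme it is scored only on whether the defensive goal was actually met.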

📝 Abstract
Recent years have seen an explosion of interest in autonomous cyber defence agents trained to defend computer networks using deep reinforcement learning. These agents are typically trained in cyber gym environments using dense, highly engineered reward functions which combine many penalties and incentives for a range of (un)desirable states and costly actions. Dense rewards help alleviate the challenge of exploring complex environments but risk biasing agents towards suboptimal and potentially riskier solutions, a critical issue in complex cyber environments. We thoroughly evaluate the impact of reward function structure on learning and policy behavioural characteristics using a variety of sparse and dense reward functions, two well-established cyber gyms, a range of network sizes, and both policy gradient and value-based RL algorithms. Our evaluation is enabled by a novel ground truth evaluation approach which allows direct comparison between different reward functions, illuminating the nuanced inter-relationships between rewards, action space and the risks of suboptimal policies in cyber environments. Our results show that sparse rewards, provided they are goal aligned and can be encountered frequently, uniquely offer both enhanced training reliability and more effective cyber defence agents with lower-risk policies. Surprisingly, sparse rewards can also yield policies that are better aligned with cyber defender goals and make sparing use of costly defensive actions without explicit reward-based numerical penalties.
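
The abstract's ground truth evaluation idea can be pictured as scoring every trained agent against simulator-level labels rather than against its own reward scale, which makes policies trained under different reward functions directly comparable. The sketch below is a minimal illustration under that assumption; the gym-style rollout interface and the `info` keys are hypothetical, not the paper's actual framework.

```python
# Hedged sketch of a reward-independent evaluation loop. The rollout interface
# (classic gym-style reset/step) and the info keys are illustrative assumptions.

def evaluate_policy(policy, env, n_episodes=100):
    """Score a trained policy on ground-truth outcomes, ignoring its training reward."""
    compromised_steps = 0
    costly_actions = 0
    total_steps = 0
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            action = policy.act(obs)
            obs, _, done, info = env.step(action)  # the scalar reward is discarded
            total_steps += 1
            # Ground-truth labels from the simulator, never shown to the agent:
            compromised_steps += info["n_hosts_truly_compromised"]
            costly_actions += info["action_was_costly"]
    return {
        "mean_compromised_hosts_per_step": compromised_steps / total_steps,
        "costly_action_rate": costly_actions / total_steps,
    }
```

Because these metrics never touch the training reward, the same yardstick applies to agents trained with sparse or dense signals, in either cyber gym.
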
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
cyber defence
reward function
sparse rewards
suboptimal policies
Innovation

Methods, ideas, or system contributions that make the work stand out.

sparse rewards
cyber defence
reinforcement learning
ground truth evaluation
reward engineering