🤖 AI Summary
This study addresses the challenge posed by advanced persistent threats (APTs) that exploit "living-off-the-land" tactics and telemetry perturbations to induce misjudgments or overreactions in automated defense systems. To counter this, the authors propose a Causal Multi-Agent Decision Framework (C-MADF), which uniquely integrates structural causal models (SCMs) with adversarial dual-agent reinforcement learning. Operating within a structurally constrained action space, C-MADF enables causally consistent and interpretable autonomous defense decisions. The framework further introduces a novel policy divergence metric and a human-in-the-loop explanatory interface to enhance decision transparency. Evaluated on the CICIoT2023 dataset, the system achieves a false positive rate of 1.8%, along with a precision of 0.997, recall of 0.961, and F1-score of 0.979.
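As a quick consistency check (not stated in the paper itself), the reported F1-score follows from the stated precision and recall as their harmonic mean: F1 = 2PR / (P + R) = 2(0.997)(0.961) / (0.997 + 0.961) ≈ 0.979, matching the figure quoted above.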
📝 Abstract
Autonomous agents are increasingly deployed in both offensive and defensive cyber operations, creating high-speed, closed-loop interactions in critical infrastructure environments. Advanced Persistent Threat (APT) actors exploit "Living off the Land" techniques and targeted telemetry perturbations to induce ambiguity in monitoring systems, causing automated defenses to overreact or misclassify benign behavior as malicious activity. Existing monolithic and multi-agent defense pipelines largely operate on correlation-based signals, lack structural constraints on response actions, and are vulnerable to reasoning drift under ambiguous or adversarial inputs. We present the Causal Multi-Agent Decision Framework (C-MADF), a structurally constrained architecture for autonomous cyber defense that integrates causal modeling with adversarial dual-policy control. C-MADF first learns a Structural Causal Model (SCM) from historical telemetry and compiles it into an investigation-level Directed Acyclic Graph (DAG) that defines admissible response transitions. This investigation roadmap is formalized as a Markov Decision Process (MDP) whose action space is explicitly restricted to causally consistent transitions. Decision-making within this constrained space is performed by a dual-agent reinforcement learning system in which a threat-optimizing Blue-Team policy is counterbalanced by a conservatively shaped Red-Team policy. Inter-policy disagreement is quantified through a Policy Divergence Score and exposed via a human-in-the-loop interface equipped with an Explainability-Transparency Score that serves as an escalation signal under uncertainty. On the real-world CICIoT2023 dataset, C-MADF reduces the false-positive rate to 1.8%, compared with 11.2%, 9.7%, and 8.4% for three state-of-the-art baselines from the literature, while achieving 0.997 precision, 0.961 recall, and 0.979 F1-score.
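The abstract does not include implementation details, but the core mechanism it describes (an investigation DAG that restricts the MDP action space, Blue- and Red-Team policies evaluated only over the admissible actions, and a divergence score that triggers human escalation) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the action names, the `INVESTIGATION_DAG` mapping, and the use of Jensen-Shannon divergence as a stand-in for the Policy Divergence Score are hypothetical, not the authors' implementation.

```python
"""Illustrative sketch (not the authors' code): a DAG-constrained action
space and a policy-divergence score computed over admissible actions."""

import numpy as np

# Hypothetical response-action vocabulary for an IoT defense playbook.
ACTIONS = ["monitor", "rate_limit", "isolate_host",
           "rotate_credentials", "escalate_to_analyst"]

# Hypothetical investigation-level DAG: each investigation state maps to the
# causally admissible follow-up response actions.
INVESTIGATION_DAG = {
    "anomalous_telemetry": {"monitor", "rate_limit", "escalate_to_analyst"},
    "confirmed_lateral_movement": {"isolate_host", "rotate_credentials",
                                   "escalate_to_analyst"},
}


def admissible_mask(state: str) -> np.ndarray:
    """Binary mask over the action vocabulary allowed by the DAG in `state`."""
    allowed = INVESTIGATION_DAG[state]
    return np.array([1.0 if a in allowed else 0.0 for a in ACTIONS])


def constrained_policy(logits: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Softmax restricted to causally admissible actions; masked-out actions
    receive zero probability mass."""
    masked = np.where(mask > 0, logits, -np.inf)
    z = np.exp(masked - masked[mask > 0].max())
    return z / z.sum()


def policy_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Jensen-Shannon divergence between two action distributions; one possible
    instantiation of a Policy Divergence Score."""
    def kl(a, b):
        return float(np.sum(a * np.log((a + eps) / (b + eps))))
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


if __name__ == "__main__":
    state = "anomalous_telemetry"
    mask = admissible_mask(state)

    # Stand-in logits for the Blue-Team (defense-optimizing) and Red-Team
    # (conservatively shaped) policies; in the paper these would come from
    # the trained dual-agent RL system.
    blue = constrained_policy(np.array([0.2, 2.0, 3.0, 1.0, 0.5]), mask)
    red = constrained_policy(np.array([1.5, 0.3, 0.1, 0.2, 2.5]), mask)

    pds = policy_divergence(blue, red)
    print("admissible actions:", sorted(INVESTIGATION_DAG[state]))
    print(f"policy divergence score: {pds:.3f}")
    if pds > 0.2:  # illustrative escalation threshold, not from the paper
        print("divergence above threshold -> escalate to human analyst")
```

Masking the policy before normalization, rather than filtering actions after sampling, is one simple way to guarantee that only causally consistent transitions ever receive probability mass; the paper's actual constraint mechanism and divergence definition may differ.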