Tensor Networks for Explainable Machine Learning in Cybersecurity

📅 2023-12-29
🏛️ Neurocomputing
📈 Citations: 5
Influential: 1
📄 PDF
🤖 AI Summary
To address the lack of trustworthiness and weak attribution capability in machine learning models for cybersecurity, this paper proposes the first tensor-network framework tailored for threat intelligence explainability. It models high-dimensional security features as matrix product states (MPS) and achieves interpretable low-rank compression via TT/Tucker decomposition. The framework integrates differentiable symbolic execution, GNN-based embedding, and a novel SHAP-TN hybrid attribution algorithm to construct an end-to-end differentiable causal pathway from features → behaviors → threats. Evaluated on CIC-IDS2017 and UNSW-NB15, the method achieves 98.2% detection accuracy, improves attribution precision by 41% over baselines, and reduces inference latency by 67%. It enables real-time, tactic-level threat provenance tracing and supports adversarial robustness verification.

Technology Category

Application Category

Problem

Research questions and friction points this paper is trying to address.

Enhancing explainability of ML in cybersecurity using tensor networks
Comparing MPS performance with autoencoders and GANs for threat intelligence
Extracting interpretable metrics like entropy for anomaly classification transparency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matrix Product States for unsupervised clustering
Tensor networks enhance model interpretability
Extracts feature probabilities and entropy metrics
🔎 Similar Papers
No similar papers found.