🤖 AI Summary
To address the limited trustworthiness and weak attribution capability of machine-learning models in cybersecurity, this paper proposes the first tensor-network framework tailored to threat-intelligence explainability. It models high-dimensional security features as matrix product states (MPS) and achieves interpretable low-rank compression via TT/Tucker decomposition. The framework integrates differentiable symbolic execution, GNN-based embedding, and a novel SHAP-TN hybrid attribution algorithm to construct an end-to-end differentiable causal pathway from features → behaviors → threats. Evaluated on CIC-IDS2017 and UNSW-NB15, the method achieves 98.2% detection accuracy, improves attribution precision by 41% over baselines, and reduces inference latency by 67%. It enables real-time, tactic-level threat provenance tracing and supports adversarial robustness verification.
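To give a concrete sense of the MPS/TT compression step the summary describes, here is a minimal tensor-train (TT-SVD) sketch in NumPy: a high-dimensional feature tensor is factored into a chain of small 3-way cores via sequential truncated SVDs. The function names, the `max_rank` cap, and the toy feature tensor are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Tensor-train (MPS) decomposition via sequential truncated SVD.

    Returns a list of cores of shape (r_prev, dim_k, r_next); the chain
    contracts back to (an approximation of) the input tensor.
    max_rank caps the bond dimension, i.e. the compression level.
    """
    dims = tensor.shape
    cores = []
    r_prev = 1
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        # Unfold: rows mix the previous bond with the current mode.
        mat = mat.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, S.size)  # truncate the bond dimension
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = S[:r, None] * Vt[:r]  # carry the remainder forward
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into a dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# Toy example: a rank-1 "feature tensor" compresses exactly.
np.random.seed(0)
a, b, c = np.random.rand(3), np.random.rand(4), np.random.rand(5)
T = np.einsum('i,j,k->ijk', a, b, c)
cores = tt_decompose(T, max_rank=2)
T_hat = tt_reconstruct(cores)
```

For a genuinely low-rank tensor the reconstruction is exact; for real security-feature tensors, `max_rank` trades reconstruction error against model size, which is where the summary's "interpretable low-rank compression" claim lives.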