🤖 AI Summary
To address the challenges of limited labeled data, poor generalization, and insufficient interpretability in network intrusion detection (NID), this paper proposes the first end-to-end NID framework integrating neuro-symbolic AI with transfer learning. The method enhances model adaptability to target domains via cross-dataset knowledge transfer and incorporates uncertainty quantification to improve robustness and decision reliability. Crucially, symbolic reasoning rules are embedded into a deep neural network architecture to enable interpretable detection outputs. Experimental results on multiple standard NID benchmarks demonstrate that the proposed approach significantly outperforms conventional few-shot learning models—achieving an average 8.3% improvement in F1-score, superior adversarial robustness, and enhanced cross-domain generalization. Moreover, it provides verifiable uncertainty estimates for detection decisions. This work establishes a novel paradigm for high-assurance NID systems grounded in principled integration of learning and reasoning.
📝 Abstract
Transfer learning is commonly utilized in various fields such as computer vision, natural language processing, and medical imaging due to its impressive capability to address subtasks and work with different datasets. However, its application in cybersecurity has not been thoroughly explored. In this paper, we present an innovative neurosymbolic AI framework designed for network intrusion detection systems, which play a crucial role in combating malicious activities in cybersecurity. Our framework leverages transfer learning and uncertainty quantification. The findings indicate that transfer learning models, trained on large and well-structured datasets, outperform neural-based models that rely on smaller datasets, paving the way for a new era in cybersecurity solutions.