An Investigation into the Performance of Non-Contrastive Self-Supervised Learning Methods for Network Intrusion Detection

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitation of supervised intrusion detection in identifying unknown attacks, this paper systematically investigates the applicability of non-contrastive self-supervised learning (NCSSL) to network intrusion detection. We propose a unified evaluation framework integrating five NCSSL methods, three encoder architectures (including CNN and Transformer), and six data augmentation strategies, conducting 90 experiments on the UNSW-NB15 and 5G-NIDD datasets. Our study is the first to empirically demonstrate NCSSL’s effectiveness for unsupervised representation learning from unlabeled network traffic: the best-performing configuration substantially outperforms unsupervised baselines such as DeepSVDD and autoencoders. Key contributions include: (1) empirical validation that NCSSL enhances generalization to unseen attacks; (2) establishment of design principles for encoder–augmentation co-optimization; and (3) introduction of a lightweight, transferable representation learning paradigm tailored to cybersecurity applications.
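The core idea behind the non-contrastive methods evaluated here (e.g. SimSiam- or BYOL-style objectives) is to make an encoder produce similar representations for two augmented views of the same traffic sample, without any negative pairs. The following is a minimal, illustrative numpy sketch of that objective on a single tabular flow-feature vector; the random-projection "encoder", the specific augmentations, and all names are stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, noise_std=0.1, mask_prob=0.2):
    """Two illustrative tabular augmentations: additive Gaussian
    noise plus random feature masking (assumed, not from the paper)."""
    noisy = x + rng.normal(0.0, noise_std, size=x.shape)
    mask = rng.random(x.shape) > mask_prob
    return noisy * mask

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Stand-in "encoder": a fixed random projection. A real backbone would be
# a trained CNN or Transformer over the flow features.
W = rng.normal(size=(16, 8))
encode = lambda x: np.tanh(x @ W)

x = rng.normal(size=16)                      # one flow-feature vector
z1, z2 = encode(augment(x)), encode(augment(x))

# Non-contrastive objective: maximize agreement between the two views
# (negative cosine similarity); no negative samples are required.
loss = -cosine_sim(z1, z2)
```

In the actual methods, collapse to a trivial constant representation is prevented by mechanisms such as stop-gradient, a momentum target network, or variance/covariance regularization, which this sketch omits.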

📝 Abstract
Network intrusion detection, a well-explored cybersecurity field, has predominantly relied on supervised learning algorithms in the past two decades. However, their limitation of detecting only known anomalies prompts the exploration of alternative approaches. Motivated by the success of self-supervised learning in computer vision, there is rising interest in adapting this paradigm for network intrusion detection. While prior research has mainly delved into contrastive self-supervised methods, the efficacy of non-contrastive methods, in conjunction with encoder architectures serving as the representation learning backbone and augmentation strategies that determine what is learned, remains unclear for effective attack detection. This paper compares the performance of five non-contrastive self-supervised learning methods using three encoder architectures and six augmentation strategies. Ninety experiments are systematically conducted on two network intrusion detection datasets, UNSW-NB15 and 5G-NIDD. For each self-supervised model, the combination of encoder architecture and augmentation method yielding the highest average precision, recall, F1-score, and AUCROC is reported. Furthermore, by comparing the best-performing models to two unsupervised baselines, DeepSVDD and an autoencoder, we showcase the competitiveness of the non-contrastive methods for attack detection. Code at: https://github.com/renje4z335jh4/non_contrastive_SSL_NIDS
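Once an encoder is trained self-supervised on (mostly benign) traffic, a common way to turn its representations into an attack detector is to score samples by their distance to the benign region of embedding space, much like the DeepSVDD baseline does. A hedged numpy sketch of that idea, with synthetic embeddings standing in for real encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embeddings from a trained encoder: benign traffic is
# assumed to cluster tightly, attacks to land farther away.
benign = rng.normal(0.0, 0.5, size=(200, 8))
attacks = rng.normal(3.0, 0.5, size=(20, 8))

centroid = benign.mean(axis=0)

def score(z):
    """Anomaly score = Euclidean distance to the benign centroid."""
    return np.linalg.norm(z - centroid, axis=-1)

# Flag anything beyond e.g. the 95th percentile of benign scores.
tau = np.quantile(score(benign), 0.95)
flagged = score(attacks) > tau
```

Sweeping the threshold `tau` over the score range is what yields the AUCROC figures reported in the paper's evaluation.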
Problem

Research questions and friction points this paper is trying to address.

Evaluating non-contrastive self-supervised learning for network intrusion detection
Comparing encoder architectures and augmentation strategies for attack detection
Assessing performance against unsupervised baselines using precision and recall metrics
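Since the evaluation hinges on precision, recall, and F1-score for binary attack detection, a small self-contained helper makes the metrics concrete (illustrative code, not from the paper's repository):

```python
def prf1(y_true, y_pred):
    """Precision, recall, and F1 for binary detection (1 = attack)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: one missed attack (false negative), one false alarm.
p, r, f = prf1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
# p = r = f = 2/3
```

High recall matters most when missed attacks are costly, while precision guards against alert fatigue from false positives.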
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-contrastive self-supervised learning for intrusion detection
Combining encoder architectures with augmentation strategies
Systematic evaluation on network datasets for attack detection
Hamed Fard
Freie Universität Berlin, Germany
Tobias Schalau
Freie Universität Berlin, Germany
Gerhard Wunder
Professor Cybersecurity and AI, FU Berlin
AI · Cybersecurity · Machine Learning · Information Theory