Poster: Enhancing GNN Robustness for Network Intrusion Detection via Agent-based Analysis

📅 2025-06-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Graph Neural Networks (GNNs) for IoT intrusion detection suffer from insufficient robustness due to distribution shift and real-world adversarial attacks. Method: This paper proposes the first LLM-augmented GNN framework, wherein a large language model acts as a security expert agent to systematically model and analyze both black-box and white-box adversarial attack scenarios. The framework integrates network-flow graph modeling, a physically collected real-world adversarial dataset from a testbed, and an end-to-end training mechanism to enhance generalization. Contribution/Results: Experiments demonstrate that the proposed method significantly improves GNN-based detector stability and accuracy under diverse realistic adversarial attacks, achieving an average 12.7% F1-score gain. It constitutes the first empirical validation of LLMs as an interpretable, collaborative, defense-aware layer in network intrusion detection systems (NIDS).

Technology Category

Application Category

📝 Abstract
Graph Neural Networks (GNNs) show great promise for Network Intrusion Detection Systems (NIDS), particularly in IoT environments, but suffer performance degradation due to distribution drift and lack robustness against realistic adversarial attacks. Current robustness evaluations often rely on unrealistic synthetic perturbations and lack demonstrations on systematic analysis of different kinds of adversarial attack, which encompass both black-box and white-box scenarios. This work proposes a novel approach to enhance GNN robustness and generalization by employing Large Language Models (LLMs) in an agentic pipeline as simulated cybersecurity expert agents. These agents scrutinize graph structures derived from network flow data, identifying and potentially mitigating suspicious or adversarially perturbed elements before GNN processing. Our experiments, using a framework designed for realistic evaluation and testing with a variety of adversarial attacks including a dataset collected from physical testbed experiments, demonstrate that integrating LLM analysis can significantly improve the resilience of GNN-based NIDS against challenges, showcasing the potential of LLM agent as a complementary layer in intrusion detection architectures.
Problem

Research questions and friction points this paper is trying to address.

Enhancing GNN robustness against adversarial attacks in NIDS
Addressing distribution drift and unrealistic synthetic perturbations in evaluations
Integrating LLM agents to improve GNN-based intrusion detection resilience
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs simulate cybersecurity expert agents
Agents analyze and mitigate adversarial graph elements
Framework tests GNN robustness with realistic attacks
🔎 Similar Papers
No similar papers found.