From Flows to Words: Can Zero-/Few-Shot LLMs Detect Network Intrusions? A Grammar-Constrained, Calibrated Evaluation on UNSW-NB15

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the feasibility of deploying large language models (LLMs) for network intrusion detection without fine-tuning. To address the semantic sparsity of raw network flows, we propose an interpretable “flow-to-text” translation protocol that maps flows into compact textual records and augments them with lightweight, domain-informed Boolean flags (e.g., TTL anomalies, asymmetry) to enrich semantic representation. Our method integrates zero-shot and few-shot prompting, syntactically constrained output generation, and a few-shot threshold calibration mechanism to ensure decision stability and interpretability. Evaluated on a balanced subset of UNSW-NB15, a 7B-parameter LLM achieves a macro-F1 score of 0.78, while a 3B-parameter model attains F1 = 0.68 on a thousand-sample test set. To our knowledge, this is the first systematic validation of pure prompt engineering for intrusion detection, demonstrating training-free deployment, high human readability, operational simplicity, and strong domain adaptability.
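To make the "flow-to-text" idea concrete, the sketch below converts one flow record into a compact textual line and attaches Boolean flags. The field names (`proto`, `service`, `dur`, `sbytes`, `dbytes`, `spkts`, `sttl`, `dttl`) follow UNSW-NB15 feature naming, but the flag definitions and thresholds here are illustrative assumptions, not the paper's actual protocol.

```python
# Illustrative flow-to-text translation with domain-informed Boolean flags.
# Flag rules and thresholds are hypothetical, chosen only to show the shape
# of the protocol described in the paper.

def flow_to_text(flow: dict) -> str:
    flags = {
        "ttl_anomaly": abs(flow["sttl"] - flow["dttl"]) > 128,   # unusual TTL gap
        "asymmetry": flow["sbytes"] > 10 * max(flow["dbytes"], 1),  # one-sided traffic
        "short_burst": flow["dur"] < 0.01 and flow["spkts"] > 5,    # many packets, tiny duration
    }
    flag_text = ", ".join(name for name, on in flags.items() if on) or "none"
    return (
        f"proto={flow['proto']} service={flow['service']} "
        f"dur={flow['dur']:.3f}s sbytes={flow['sbytes']} dbytes={flow['dbytes']} "
        f"flags=[{flag_text}]"
    )

record = {"proto": "tcp", "service": "http", "dur": 0.005,
          "sbytes": 4000, "dbytes": 120, "spkts": 8, "sttl": 254, "dttl": 62}
print(flow_to_text(record))
```

The resulting one-line record is what would be embedded in the prompt, with the flag names giving the LLM explicit, human-readable cues.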

📝 Abstract
Large Language Models (LLMs) can reason over natural-language inputs, but their role in intrusion detection without fine-tuning remains uncertain. This study evaluates a prompt-only approach on UNSW-NB15 by converting each network flow to a compact textual record and augmenting it with lightweight, domain-inspired boolean flags (asymmetry, burst rate, TTL irregularities, timer anomalies, rare service/state, short bursts). To reduce output drift and support measurement, the model is constrained to produce structured, grammar-valid responses, and a single decision threshold is calibrated on a small development split. We compare zero-shot, instruction-guided, and few-shot prompting to strong tabular and neural baselines under identical splits, reporting accuracy, precision, recall, F1, and macro scores. Empirically, unguided prompting is unreliable, while instructions plus flags substantially improve detection quality; adding calibrated scoring further stabilizes results. On a balanced subset of two hundred flows, a 7B instruction-tuned model with flags reaches macro-F1 near 0.78; a lighter 3B model with few-shot cues and calibration attains F1 near 0.68 on one thousand examples. As the evaluation set grows to two thousand flows, decision quality decreases, revealing sensitivity to coverage and prompting. Tabular baselines remain more stable and faster, yet the prompt-only pipeline requires no gradient training, produces readable artifacts, and adapts easily through instructions and flags. Contributions include a flow-to-text protocol with interpretable cues, a calibration method for thresholding, a systematic baseline comparison, and a reproducibility bundle with prompts, grammar, metrics, and figures.
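The abstract's "structured, grammar-valid responses" can be enforced by validating each model output against a fixed response schema and rejecting (or re-prompting on) anything that drifts. The sketch below uses a regular expression as a minimal stand-in for the paper's grammar; the exact `label: ...; score: ...` schema is an assumption for illustration.

```python
import re

# Hypothetical response schema: "label: attack|normal; score: <0..1>".
# A real grammar-constrained decoder would enforce this at generation time;
# here we simply validate after the fact.
RESPONSE_GRAMMAR = re.compile(
    r"^label:\s*(attack|normal);\s*score:\s*(0(?:\.\d{1,3})?|1(?:\.0{1,3})?)$"
)

def parse_response(text: str):
    """Return (label, score) if the output matches the grammar, else None."""
    m = RESPONSE_GRAMMAR.match(text.strip().lower())
    if m is None:
        return None  # output drift: discard or re-prompt
    return m.group(1), float(m.group(2))

print(parse_response("label: attack; score: 0.91"))
print(parse_response("I think this flow looks malicious"))
```

Parsed scores like these are what the single calibrated decision threshold is then applied to.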
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' zero-shot intrusion detection capability without fine-tuning
Converting network flows to text with domain-specific boolean flags
Assessing grammar-constrained prompting against traditional detection methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Converts network flows to compact textual records
Augments data with domain-inspired boolean flags
Constrains outputs to grammar-valid structured responses and calibrates a decision threshold on a small development split
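The threshold calibration step can be sketched as a simple sweep over candidate thresholds on a small development split, keeping the one that maximizes F1. The scores and labels below are synthetic, and the candidate grid is an assumption; the paper only states that a single threshold is calibrated on a development split.

```python
# Illustrative single-threshold calibration on a development split.

def f1(y_true, y_pred):
    """Binary F1 from parallel label lists (1 = attack, 0 = normal)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def calibrate_threshold(scores, labels, candidates=None):
    """Pick the candidate threshold maximizing dev-split F1."""
    candidates = candidates or [i / 100 for i in range(5, 100, 5)]
    return max(candidates,
               key=lambda t: f1(labels, [int(s >= t) for s in scores]))

dev_scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.7]  # synthetic LLM scores
dev_labels = [0,   0,   1,    1,   1,    1,   0,   0]
print(calibrate_threshold(dev_scores, dev_labels))
```

Freezing this one threshold before touching the test set is what makes the reported scores comparable across prompting variants.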
Mohammad Abdul Rehman
Future Data Minds Research Lab, Australia
Syed Imad Ali Shah
Future Data Minds Research Lab, Australia
Abbas Anwar
FutureDataMinds, Abdul Wali Khan University Mardan, KPK, Pakistan
AI · Machine Learning · Deep Learning · Computer Vision
Noor Islam
Future Data Minds Research Lab, Australia