🤖 AI Summary
This study investigates whether large language models (LLMs) can perform network intrusion detection without fine-tuning. To address the semantic sparsity of raw network flows, we propose an interpretable “flow-to-text” translation protocol that maps each flow into a compact textual record and augments it with lightweight, domain-informed Boolean flags (e.g., TTL anomalies, traffic asymmetry) to enrich the semantic representation. Our method combines zero-shot and few-shot prompting, syntactically constrained output generation, and a decision threshold calibrated on a small development split to ensure stability and interpretability. Evaluated on a balanced subset of UNSW-NB15, a 7B-parameter LLM achieves a macro-F1 of 0.78, while a 3B-parameter model attains F1 = 0.68 on a thousand-sample test set. To our knowledge, this is the first systematic validation of pure prompt engineering for intrusion detection, demonstrating training-free deployment, high human readability, operational simplicity, and strong domain adaptability.
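The flow-to-text idea can be sketched as follows. Field names loosely follow UNSW-NB15 conventions (`proto`, `service`, `state`, `dur`, `sbytes`, `dbytes`, `sttl`, `dttl`), but the exact record format, flag names, and thresholds below are illustrative assumptions, not the paper's actual protocol:

```python
def flow_to_text(flow: dict) -> str:
    """Render one network flow as a compact textual record with Boolean flags.

    Flag thresholds are illustrative stand-ins for the paper's
    domain-informed cues, not the published values.
    """
    asym = flow["sbytes"] > 10 * max(flow["dbytes"], 1)           # traffic asymmetry
    ttl_odd = flow["sttl"] in (0, 254, 255) or flow["dttl"] == 0  # TTL irregularity
    burst = flow["dur"] < 0.01 and flow["sbytes"] > 1000          # short burst
    flags = [name for name, on in [("ASYMMETRY", asym),
                                   ("TTL_ANOMALY", ttl_odd),
                                   ("SHORT_BURST", burst)] if on]
    return (f"proto={flow['proto']} service={flow['service']} state={flow['state']} "
            f"dur={flow['dur']:.3f}s sbytes={flow['sbytes']} dbytes={flow['dbytes']} "
            f"flags=[{', '.join(flags) or 'none'}]")

example = {"proto": "tcp", "service": "http", "state": "FIN",
           "dur": 0.004, "sbytes": 4096, "dbytes": 120, "sttl": 254, "dttl": 29}
print(flow_to_text(example))
```

A record like this, rather than the raw tabular row, is what the prompt presents to the LLM; the Boolean flags pre-digest domain knowledge the model would otherwise have to infer from bare numbers.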
📝 Abstract
Large Language Models (LLMs) can reason over natural-language inputs, but their usefulness for intrusion detection without fine-tuning remains uncertain. This study evaluates a prompt-only approach on UNSW-NB15 by converting each network flow into a compact textual record augmented with lightweight, domain-inspired Boolean flags (asymmetry, burst rate, TTL irregularities, timer anomalies, rare service/state, short bursts). To reduce output drift and support measurement, the model is constrained to produce structured, grammar-valid responses, and a single decision threshold is calibrated on a small development split. We compare zero-shot, instruction-guided, and few-shot prompting against strong tabular and neural baselines under identical splits, reporting accuracy, precision, recall, F1, and macro scores. Empirically, unguided prompting is unreliable, whereas instructions plus flags substantially improve detection quality, and calibrated scoring further stabilizes results. On a balanced subset of two hundred flows, a 7B instruction-tuned model with flags reaches macro-F1 near 0.78; a lighter 3B model with few-shot cues and calibration attains F1 near 0.68 on one thousand examples. As the evaluation set grows to two thousand flows, decision quality degrades, revealing sensitivity to example coverage and prompt design. Tabular baselines remain more stable and faster, yet the prompt-only pipeline requires no gradient training, produces readable artifacts, and adapts easily through instructions and flags. Contributions include a flow-to-text protocol with interpretable cues, a calibration procedure for thresholding, a systematic baseline comparison, and a reproducibility bundle with prompts, grammar, metrics, and figures.
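The single-threshold calibration step can be illustrated with a minimal sketch. The scores below stand in for attack probabilities extracted from the model's grammar-constrained responses; the scoring prompt itself and the paper's actual dev-split data are not reproduced here:

```python
def f1(tp: int, fp: int, fn: int) -> float:
    """F1 score from confusion-matrix counts (0.0 when undefined)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def calibrate_threshold(scores: list[float], labels: list[int]) -> float:
    """Pick the decision threshold that maximizes F1 on a small dev split.

    A flow is predicted malicious when its score >= threshold.
    """
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(scores)):  # candidate thresholds: observed scores
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        score = f1(tp, fp, fn)
        if score > best_f1:
            best_f1, best_t = score, t
    return best_t

# Hypothetical dev-split scores (model outputs) and ground-truth labels.
dev_scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.7]
dev_labels = [0,   0,   1,    1,   1,    1,   0,   0]
threshold = calibrate_threshold(dev_scores, dev_labels)
```

The calibrated threshold is then frozen and applied unchanged on the held-out test flows, which is what makes the abstract's "single decision threshold" claim measurable.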