🤖 AI Summary
This study addresses fine-grained network traffic classification, focusing on four real-world traffic types—web browsing, IPSec, backup, and email—comprising 30,959 samples with 19-dimensional features. It systematically evaluates conventional machine learning models (XGBoost, DNN), a Transformer, and large language models (LLMs)—GPT-4o and Gemini—under zero-shot and few-shot settings, integrating feature engineering with prompt engineering. Results show that the Transformer achieves the highest accuracy (98.95%), while GPT-4o and Gemini demonstrate markedly improved generalization in few-shot scenarios, particularly for web browsing and email traffic. This work is the first to comparatively assess LLMs and traditional models in this domain under low-labeling conditions, revealing both the promise and practical limitations of LLMs for low-cost, annotation-efficient network traffic analysis. It establishes a novel, transferable paradigm for traffic classification that bridges symbolic feature representation and semantic prompting.
📝 Abstract
This study uses various models to address network traffic classification, categorizing traffic into web browsing, IPSec, backup, and email. We collected a comprehensive dataset from Arbor Edge Defender (AED) devices, comprising 30,959 observations and 19 features. Multiple models were evaluated, including Naive Bayes, Decision Tree, Random Forest, Gradient Boosting, XGBoost, Deep Neural Networks (DNN), Transformer, and two Large Language Models (LLMs), GPT-4o and Gemini, with zero- and few-shot learning. Transformer and XGBoost showed the best performance, achieving the highest accuracies of 98.95% and 97.56%, respectively. GPT-4o and Gemini showed promising results with few-shot learning, improving accuracy significantly over their initial zero-shot performance. While Gemini and GPT-4o in the few-shot setting performed well on categories like web browsing and email, misclassifications occurred in more complex categories like IPSec and backup. The study highlights the importance of model selection, fine-tuning, and the balance between training data size and model complexity for achieving reliable classification results.
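The few-shot LLM setup described above can be illustrated with a minimal prompt-construction sketch: labeled example flows are serialized into the prompt, followed by the unlabeled query flow. The feature names and values below are invented stand-ins (the paper's 19 AED features are not listed in the abstract), and the exact prompt template used with GPT-4o and Gemini is an assumption.

```python
# Hypothetical few-shot prompt builder for traffic classification.
# FEATURES are invented stand-ins, not the paper's actual 19 AED features.
FEATURES = ["avg_pkt_size", "duration_s", "dst_port"]

def make_prompt(examples, query):
    """Build a few-shot prompt: labeled example flows, then the query flow."""
    lines = ["Classify the flow as web browsing, IPSec, backup, or email."]
    for feats, label in examples:
        row = ", ".join(f"{k}={v}" for k, v in zip(FEATURES, feats))
        lines.append(f"Flow: {row} -> {label}")
    row = ", ".join(f"{k}={v}" for k, v in zip(FEATURES, query))
    lines.append(f"Flow: {row} -> ")  # model completes the label
    return "\n".join(lines)

# Two illustrative "shots" followed by an unlabeled query flow.
shots = [((512, 3.2, 443), "web browsing"), ((1400, 120.0, 500), "IPSec")]
prompt = make_prompt(shots, (800, 45.0, 25))
```

In the zero-shot condition, `examples` would simply be empty, leaving only the task instruction and the query flow.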