Network Traffic Classification Using Machine Learning, Transformer, and Large Language Models

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses fine-grained network traffic classification, focusing on four real-world traffic types—web browsing, IPSec, backup, and email—comprising 30,959 samples with 19-dimensional features. It systematically evaluates conventional machine learning models (XGBoost, DNN), a Transformer, and large language models (LLMs), GPT-4o and Gemini, under zero-shot and few-shot settings, integrating feature engineering with prompt engineering. Results show that the Transformer achieves the highest accuracy (98.95%), while GPT-4o and Gemini demonstrate markedly improved generalization in few-shot scenarios—particularly for web browsing and email traffic. This work is the first to comparatively assess LLMs and traditional models in this domain under low-labeling conditions, revealing both the promise and practical limitations of LLMs for low-cost, annotation-efficient network traffic analysis. It establishes a novel, transferable paradigm for traffic classification that bridges symbolic feature representation and semantic prompting.

📝 Abstract
This study uses various models to address network traffic classification, categorizing traffic into web browsing, IPSec, backup, and email. We collected a comprehensive dataset from Arbor Edge Defender (AED) devices, comprising 30,959 observations and 19 features. Multiple models were evaluated, including Naive Bayes, Decision Tree, Random Forest, Gradient Boosting, XGBoost, Deep Neural Networks (DNN), a Transformer, and two Large Language Models (LLMs), GPT-4o and Gemini, with zero- and few-shot learning. The Transformer and XGBoost performed best, achieving the highest accuracies of 98.95% and 97.56%, respectively. GPT-4o and Gemini showed promising results with few-shot learning, improving accuracy significantly over their initial zero-shot performance. While Gemini few-shot and GPT-4o few-shot performed well in categories like Web and Email, misclassifications occurred in more complex categories like IPSec and Backup. The study highlights the importance of model selection, fine-tuning, and the balance between training data size and model complexity for achieving reliable classification results.
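To make the classical-ML side of the comparison concrete, the sketch below implements a miniature Gaussian Naive Bayes classifier (one of the baselines the abstract lists) on synthetic flow features. The feature values, class names, and three-dimensional vectors are illustrative assumptions only; the paper's real dataset uses 19 features per flow.

```python
import math
from collections import defaultdict

# Hypothetical miniature version of the paper's setup: each flow is a
# feature vector labeled with a traffic class. Values are invented for
# illustration (the real AED dataset has 19 features per observation).
train = [
    ([0.9, 120.0, 4.0], "web"),
    ([0.8, 150.0, 5.0], "web"),
    ([0.1, 900.0, 60.0], "backup"),
    ([0.2, 870.0, 55.0], "backup"),
]

def fit_gaussian_nb(data):
    """Estimate per-class feature means/variances and class priors."""
    by_class = defaultdict(list)
    for x, y in data:
        by_class[y].append(x)
    n = len(data)
    stats = {}
    for label, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [
            max(sum((v - m) ** 2 for v in col) / len(rows), 1e-6)
            for col, m in zip(zip(*rows), means)
        ]
        stats[label] = (means, variances, len(rows) / n)
    return stats

def predict(stats, x):
    """Return the class with the highest Gaussian log-likelihood."""
    best, best_score = None, -math.inf
    for label, (means, variances, prior) in stats.items():
        score = math.log(prior)
        for v, m, var in zip(x, means, variances):
            score += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        if score > best_score:
            best, best_score = label, score
    return best

model = fit_gaussian_nb(train)
print(predict(model, [0.85, 130.0, 4.5]))  # close to the "web" samples
```

The stronger models in the study (XGBoost, DNN, Transformer) consume the same feature vectors; only the decision function changes.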
Problem

Research questions and friction points this paper is trying to address.

Classify network traffic into categories like web, email, and IPSec.
Evaluate machine learning models for traffic classification accuracy.
Assess Transformer and LLMs' performance in few-shot learning scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilized Transformer and XGBoost for high accuracy.
Applied GPT-4o and Gemini with few-shot learning.
Collected dataset from Arbor Edge Defender devices.
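The few-shot step the paper pairs with GPT-4o and Gemini amounts to embedding a handful of labeled flows in the prompt ahead of the unlabeled query. A minimal sketch of that prompt construction follows; the feature names (`dst_port`, `avg_pkt_size`) and example values are assumptions, not the paper's 19-feature schema.

```python
# Illustrative few-shot prompt builder for an LLM traffic classifier.
# Labeled example flows precede the query flow, which the model is
# asked to complete with one of the four traffic classes.
CLASSES = ["web browsing", "IPSec", "backup", "email"]

def build_few_shot_prompt(examples, query_features):
    lines = [
        "Classify the network flow into one of: " + ", ".join(CLASSES) + ".",
        "",
    ]
    for features, label in examples:
        lines.append(f"Flow: {features} -> {label}")
    lines.append(f"Flow: {query_features} -> ?")
    return "\n".join(lines)

examples = [
    ({"dst_port": 443, "avg_pkt_size": 900}, "web browsing"),
    ({"dst_port": 500, "avg_pkt_size": 120}, "IPSec"),
]
prompt = build_few_shot_prompt(examples, {"dst_port": 25, "avg_pkt_size": 600})
print(prompt)
```

In the zero-shot setting the `examples` list is simply empty, which is where the paper reports the LLMs struggling on IPSec and backup traffic.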
Ahmad Antari
Security Department, Blue-Team, JAWWAL, Nablus, Palestine
Yazan Abo-Aisheh
Technology Systems Department, Palestine Monetary Authority, Ramallah, Palestine
Jehad Shamasneh
Department of Natural, Engineering and Technology Sciences, Faculty of Graduate Studies, Arab American University, Ramallah, Palestine
Huthaifa I. Ashqar
Arab American University
Machine Learning · AI · Intelligent Transportation Systems · Connected and Automated Vehicles