TrafficLLM: Enhancing Large Language Models for Network Traffic Analysis with Generic Traffic Representation

📅 2025-04-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing machine learning methods and large language models (LLMs) exhibit weak generalization and poor adaptability to heterogeneous raw network traffic. To address this, we propose the first LLM framework specifically designed for network traffic analysis. Our approach introduces a traffic-domain tokenization mechanism and a novel two-stage fine-tuning paradigm—pre-alignment followed by task-specific refinement—enabling end-to-end modeling and universal representation learning directly from raw traffic. The framework supports cross-task threat detection and traffic generation with strong extensibility. Evaluated across 10 diverse scenarios comprising 229 traffic types, it achieves an F1-score of 0.9875 for detection and 0.9483 for generation, outperforming state-of-the-art methods by up to 80.12%. It also improves generalization to unseen traffic by 18.6% and demonstrates excellent accuracy in enterprise deployments.

📝 Abstract
Machine learning (ML)-powered network traffic analysis is widely used for threat detection. Unfortunately, the generalization of existing ML methods across different tasks and unseen data is very limited. Large language models (LLMs), known for their strong generalization capabilities, have shown promising performance in various domains. However, their application to the traffic analysis domain is limited because network traffic has significantly different characteristics from natural language. To address this issue, we propose TrafficLLM, which introduces a dual-stage fine-tuning framework to learn generic traffic representations from heterogeneous raw traffic data. The framework uses traffic-domain tokenization, a dual-stage tuning pipeline, and extensible adaptation to help LLMs exercise their generalization ability on dynamic traffic analysis tasks, enabling traffic detection and traffic generation across a wide range of downstream tasks. We evaluate TrafficLLM across 10 distinct scenarios and 229 types of traffic. TrafficLLM achieves F1-scores of 0.9875 and 0.9483, up to 80.12% and 33.92% better than existing detection and generation methods, respectively. It also shows strong generalization on unseen traffic, with an 18.6% performance improvement. We further evaluate TrafficLLM in real-world scenarios; the results confirm that TrafficLLM is easy to scale and achieves accurate detection performance on enterprise traffic.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLMs for network traffic analysis generalization
Overcoming limited generalization in ML traffic threat detection
Adapting LLMs to heterogeneous raw traffic data characteristics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-stage fine-tuning for traffic representation
Traffic-domain tokenization for LLM adaptation
Extensible adaptation for dynamic traffic tasks
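The paper itself does not reproduce its tokenizer here, but the traffic-domain tokenization idea above can be illustrated with a minimal sketch: protocol metadata becomes field-value tokens and the raw payload is chunked into fixed-width hex words, so an LLM tokenizer never splits byte boundaries inconsistently. The function name, the `<key=value>` token format, and the two-byte chunk granularity are all illustrative assumptions, not the paper's actual scheme.

```python
def tokenize_flow(packet_bytes: bytes, fields: dict) -> list[str]:
    """Hypothetical traffic-domain tokenizer sketch (not the paper's
    actual scheme): protocol fields plus raw payload bytes become
    string tokens an LLM can consume."""
    tokens = []
    # Protocol metadata becomes "<key=value>" tokens (assumed format).
    for key, value in fields.items():
        tokens.append(f"<{key}={value}>")
    # The raw payload is chunked into fixed two-byte hex words so byte
    # boundaries are preserved deterministically.
    hex_str = packet_bytes.hex()
    for i in range(0, len(hex_str), 4):
        tokens.append(hex_str[i:i + 4])
    return tokens

# A TLS ClientHello-like prefix with assumed metadata fields.
example = tokenize_flow(b"\x16\x03\x01\x02", {"proto": "tls", "dport": 443})
```

Keeping metadata and payload in one token stream is what lets a single model serve both detection (classify the stream) and generation (sample a stream) without per-task feature engineering.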
Tianyu Cui
Zhongguancun Laboratory, Beijing, 100093, China
Xinjie Lin
Zhongguancun Lab
Traffic Analysis, Network Security, AI Security, Network Measurement
Sijia Li
Institute of Information Engineering, Chinese Academy of Sciences
Miao Chen
Indiana University Bloomington
Natural Language Processing, Data Science, Text Mining, Ontology, Semantic Web
Qilei Yin
Zhongguancun Laboratory, Beijing, 100093, China
Qi Li
Zhongguancun Laboratory, Beijing 100093, China, and Tsinghua University, Beijing 100094, China
Ke Xu
Zhongguancun Laboratory, Beijing 100093, China, and Tsinghua University, Beijing 100094, China