Introducing Large Language Models as the Next Challenging Internet Traffic Source

📅 2025-04-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study identifies large language models (LLMs) and generative AI as the next generation of “killer” Internet traffic sources—succeeding P2P and video streaming—driving unprecedented multilingual, multimodal (text/speech/image) interaction traffic. To address the lack of empirical characterization, we introduce the first measurement framework for remote user–agent interactions, conducting end-to-end request–response packet capture and statistical analysis across mainstream open-source LLMs in 2025, combined with traffic modeling and variability analysis. Experimental results show an average prompt–response exchange consumes 7,593 ± 369 bytes—comparable to conventional web browsing or email—positioning LLM interaction traffic as a significant emerging component of network load. Key contributions include: (1) formalizing the LLM traffic source paradigm; (2) establishing the first reproducible, empirically grounded methodology for LLM interaction traffic measurement; and (3) quantitatively assessing its scale effects within cloud–edge collaborative architectures.

📝 Abstract
This article explores the growing impact of large language models (LLMs) and Generative AI (GenAI) tools on Internet traffic, focusing on their role as a new and significant source of network load. As these AI tools continue to gain importance in applications ranging from virtual assistants to content generation, the volume of traffic they generate is expected to increase massively. These models use the Internet as the global infrastructure for delivering multimedia messages (text, voice, images, video, etc.) to users, by interconnecting users and devices with AI agents typically deployed in the cloud. We believe this represents a new paradigm that will lead to a considerable increase in network traffic, and network operators must be prepared to address the resulting demands. To support this claim, we provide a proof-of-concept and source code for measuring traffic in remote user-agent interactions, estimating the traffic generated per prompt for some of the most popular open-source LLMs in 2025. The average size of each prompt query and response is 7,593 bytes, with a standard deviation of 369 bytes. These numbers are comparable with email and web browsing traffic. However, we envision AI as the next “killer application” that will saturate networks with traffic, just as Peer-to-Peer and Video-on-Demand traffic dominated in previous decades.
Problem

Research questions and friction points this paper is trying to address.

Impact of LLMs and GenAI on Internet traffic growth
Measuring traffic demands from AI-user interactions
Preparing networks for AI-driven traffic saturation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Measuring traffic in remote user-agent interactions
Estimating traffic per prompt for open-source LLMs
Proof-of-concept for AI-generated network load
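The per-prompt estimation described above can be sketched in a few lines: sum the application-layer payload bytes of each prompt–response pair, then report the mean and standard deviation across exchanges. This is a minimal Python sketch, not the paper's actual tool — the `sample_exchanges` data below is purely illustrative (synthetic strings, not captured LLM traffic), and transport/header overhead is deliberately ignored.

```python
import statistics

def exchange_bytes(prompt: str, response: str) -> int:
    """Application-layer bytes for one prompt-response exchange
    (UTF-8 payload sizes only; headers and transport overhead ignored)."""
    return len(prompt.encode("utf-8")) + len(response.encode("utf-8"))

# Hypothetical exchanges standing in for captured LLM traffic.
sample_exchanges = [
    ("Summarise this paragraph.", "x" * 7200),
    ("Translate to Spanish.", "y" * 7600),
    ("Write a haiku.", "z" * 7900),
]

sizes = [exchange_bytes(p, r) for p, r in sample_exchanges]
print(f"mean = {statistics.mean(sizes):.0f} B, "
      f"stdev = {statistics.stdev(sizes):.0f} B")
```

In a real capture, `sizes` would come from per-flow byte counts extracted from a packet trace (e.g. with tcpdump/libpcap) of the user–agent session, which is the step the paper's proof-of-concept automates.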
Nataliia Koneva
Universidad Carlos III de Madrid
Packet-Optical Networks · RL · AI
Alejandro Leonardo García Navarro
Department of Telematic Engineering, Universidad Carlos III de Madrid (UC3M), Spain
Alfonso Sánchez-Macián
Department of Telematic Engineering, Universidad Carlos III de Madrid (UC3M), Spain
José Alberto Hernández
Department of Telematic Engineering, Universidad Carlos III de Madrid (UC3M), Spain
Moshe Zukerman
City University of Hong Kong
Teletraffic · Queueing Theory · Network Design · Network Optimization · Optical Networks
Óscar González de Dios
Telefónica I+D, Madrid, Spain