ServeGen: Workload Characterization and Generation of Large Language Model Serving in Production

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing studies lack a systematic characterization of production workloads for large language models (LLMs), suffering in particular from insufficient scale and dimensionality in multimodal and reasoning-model scenarios. Method: This paper presents the first end-to-end workload characterization, based on real-world logs from a global cloud platform, spanning language, multimodal, and reasoning models. It introduces a client-behavior-driven workload generation paradigm that overcomes the limitations of conventional static or statistical approaches. Contribution/Results: The authors design a reproducible and verifiable synthetic workload framework, validated in a production use case: it avoids 50% resource under-provisioning compared to naive workload generation and improves benchmark fidelity. The framework will be open-sourced to advance research on and optimization of LLM serving systems.

📝 Abstract
With the widespread adoption of Large Language Models (LLMs), serving LLM inference requests has become an increasingly important task, attracting active research advancements. Practical workloads play an essential role in this process: they are critical for motivating and benchmarking serving techniques and systems. However, the existing understanding of real-world LLM serving workloads is limited due to the lack of a comprehensive workload characterization. Prior analyses remain insufficient in scale and scope, thus failing to fully capture intricate workload characteristics. In this paper, we fill the gap with an in-depth characterization of LLM serving workloads collected from our worldwide cloud inference serving service, covering not only language models but also emerging multimodal and reasoning models, and unveiling important new findings in each case. Moreover, based on our findings, we propose ServeGen, a principled framework for generating realistic LLM serving workloads by composing them on a per-client basis. A practical use case in production validates that ServeGen avoids 50% under-provisioning compared to naive workload generation, demonstrating ServeGen's advantage in performance benchmarking. We will open-source ServeGen to foster future research.
Problem

Research questions and friction points this paper is trying to address.

Lack of comprehensive characterization of real-world LLM serving workloads
Insufficient scale and scope in prior workload analyses
Need for realistic workload generation for performance benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Characterizes LLM serving workloads from a worldwide cloud inference service, covering language, multimodal, and reasoning models
Proposes ServeGen, a principled framework that generates realistic workloads by composing them on a per-client basis
Validates ServeGen in production, avoiding 50% under-provisioning compared to naive workload generation
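The per-client composition idea can be illustrated with a minimal sketch. The paper's actual client models are not detailed here, so the distributions below (Poisson arrivals, exponential token lengths) and all function names are illustrative assumptions, not ServeGen's implementation: each client gets its own request rate and input/output length profile, and the aggregate workload is the time-ordered merge of the per-client streams.

```python
import heapq
import random

def generate_client_stream(rate_rps, mean_in, mean_out, duration_s, rng):
    """Generate one client's requests as (timestamp, input_len, output_len).

    Poisson arrivals and exponential token lengths are illustrative
    placeholders; a real characterization would fit per-client
    distributions from production traces.
    """
    t, requests = 0.0, []
    while True:
        t += rng.expovariate(rate_rps)  # Poisson process: exponential gaps
        if t >= duration_s:
            break
        in_len = max(1, int(rng.expovariate(1.0 / mean_in)))
        out_len = max(1, int(rng.expovariate(1.0 / mean_out)))
        requests.append((t, in_len, out_len))
    return requests

def compose_workload(clients, duration_s, seed=0):
    """Merge per-client streams into one time-ordered aggregate workload."""
    rng = random.Random(seed)
    streams = [
        generate_client_stream(c["rate"], c["in"], c["out"], duration_s, rng)
        for c in clients
    ]
    # Each stream is already sorted by timestamp, so a k-way merge suffices.
    return list(heapq.merge(*streams))

# Two hypothetical client profiles: a chatty client with short outputs
# and a heavy client with long prompts and long generations.
clients = [
    {"rate": 2.0, "in": 512, "out": 128},
    {"rate": 0.5, "in": 2048, "out": 1024},
]
workload = compose_workload(clients, duration_s=60.0)
```

The point of composing per client rather than sampling one aggregate distribution is that it preserves client-level structure (bursty clients stay bursty, heavy clients stay heavy), which a single pooled distribution washes out.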
Yuxing Xiang
School of Computer Science, Peking University
Xue Li
Alibaba Group
Kun Qian
Alibaba Group
Wenyuan Yu
Alibaba Group
Graph computation, data management, distributed systems and parallel computation
Ennan Zhai
Alibaba Group
Computer Networks, Security, Programming Language, Cloud Computing
Xin Jin
School of Computer Science, Peking University