🤖 AI Summary
Existing studies lack systematic characterization of production workloads for large language models (LLMs), particularly suffering from insufficient scale and dimensionality in multimodal and reasoning-model scenarios.
Method: This paper presents the first end-to-end workload characterization—based on real-world logs from global cloud platforms—spanning language, multimodal, and reasoning models. It introduces a novel, client-behavior-driven workload generation paradigm that overcomes limitations of conventional static or statistical approaches.
Contribution/Results: We design a reproducible and verifiable synthetic workload framework, validated via production A/B testing: it reduces resource under-provisioning by 50% and significantly improves benchmark fidelity. The framework will be open-sourced to advance research and optimization of LLM serving systems.
📝 Abstract
With the widespread adoption of Large Language Models (LLMs), serving LLM inference requests has become an increasingly important task, attracting active research advancements. Practical workloads play an essential role in this process: they are critical for motivating and benchmarking serving techniques and systems. However, the existing understanding of real-world LLM serving workloads is limited due to the lack of a comprehensive workload characterization. Prior analyses remain insufficient in scale and scope, thus failing to fully capture intricate workload characteristics. In this paper, we fill the gap with an in-depth characterization of LLM serving workloads collected from our worldwide cloud inference serving service, covering not only language models but also emerging multimodal and reasoning models, and unveiling important new findings in each case. Moreover, based on our findings, we propose ServeGen, a principled framework for generating realistic LLM serving workloads by composing them on a per-client basis. A practical use case in production validates that ServeGen avoids 50% under-provisioning compared to naive workload generation, demonstrating ServeGen's advantage in performance benchmarking. We will open-source ServeGen to foster future research.