LLMServingSim 2.0: A Unified Simulator for Heterogeneous and Disaggregated LLM Serving Infrastructure

📅 2026-02-26
🤖 AI Summary
Existing simulators struggle to jointly model runtime software-hardware interactions in heterogeneous hardware environments and disaggregated large language model (LLM) serving architectures, limiting systematic evaluation of performance, memory, and power consumption. This work proposes the first unified system-level simulator that integrates service scheduling and hardware behavior within a shared runtime loop, explicitly capturing their dynamic interactions, including batching, routing, offloading, and energy efficiency, under heterogeneous accelerators, near-memory computing, and disaggregated resource deployment. The framework employs a profiling-driven modeling approach, enabling scalable integration of emerging hardware and establishing an evaluation pathway for co-design of hardware and serving systems. Experimental results demonstrate that simulations complete in approximately 10 minutes with an average error of only 0.97% on key metrics, achieving both high fidelity and practical usability.

📝 Abstract
Large language model (LLM) serving infrastructures are undergoing a shift toward heterogeneity and disaggregation. Modern deployments increasingly integrate diverse accelerators and near-memory processing technologies, introducing significant hardware heterogeneity, while system software increasingly separates computation, memory, and model components across distributed resources to improve scalability and efficiency. As a result, LLM serving performance is no longer determined by hardware or software choices in isolation, but by their runtime interaction through scheduling, data movement, and interconnect behavior. However, understanding these interactions remains challenging, as existing simulators lack the ability to jointly model heterogeneous hardware and disaggregated serving techniques within a unified, runtime-driven framework. This paper presents LLMServingSim 2.0, a unified system-level simulator designed to make runtime-driven hardware-software interactions in heterogeneous and disaggregated LLM serving infrastructures explicit and analyzable. LLMServingSim 2.0 embeds serving decisions and hardware behavior into a single runtime loop, enabling interaction-aware modeling of batching, routing, offloading, memory, and power. The simulator supports extensible integration of emerging accelerators and memory systems through profile-based modeling, while capturing dynamic serving behavior and system-level effects. We validate LLMServingSim 2.0 against real deployments, showing that it reproduces key performance, memory, and power metrics with an average error of 0.97%, while maintaining simulation times of around 10 minutes even for complex configurations. These results demonstrate that LLMServingSim 2.0 provides a practical bridge between hardware innovation and serving-system design, enabling systematic exploration and co-design for next-generation LLM serving infrastructures.
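The abstract describes a profile-based simulator that embeds serving decisions and hardware behavior in a single runtime loop. The paper does not publish its internals here, so the following is only a minimal illustrative sketch of that idea under stated assumptions: a hypothetical lookup table (`PROFILE`) stands in for profiled per-iteration latency and power of each device, and a simple FCFS batching policy stands in for the simulator's actual scheduling logic.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical profile table (not from the paper): maps
# (device, batch_size) -> (per-iteration latency in ms, power in W),
# standing in for profiling-driven hardware modeling.
PROFILE: Dict[Tuple[str, int], Tuple[float, float]] = {
    ("gpu", 1): (12.0, 300.0),
    ("gpu", 2): (15.0, 340.0),
    ("gpu", 4): (20.0, 390.0),
    ("npu", 1): (18.0, 120.0),
    ("npu", 2): (22.0, 140.0),
    ("npu", 4): (30.0, 170.0),
}

@dataclass
class Request:
    rid: int
    remaining_tokens: int  # decode iterations left for this request

@dataclass
class SimState:
    clock_ms: float = 0.0
    energy_j: float = 0.0
    finished: List[int] = field(default_factory=list)

def step(batch: List[Request], device: str, state: SimState) -> None:
    """One iteration of the shared runtime loop: look up the profiled
    cost for this (device, batch size), advance the simulated clock,
    accumulate energy, and retire requests that finished decoding."""
    latency_ms, power_w = PROFILE[(device, len(batch))]
    state.clock_ms += latency_ms
    state.energy_j += power_w * latency_ms / 1000.0
    for r in batch:
        r.remaining_tokens -= 1
        if r.remaining_tokens == 0:
            state.finished.append(r.rid)

def simulate(requests: List[Request], device: str = "gpu",
             max_batch: int = 4) -> SimState:
    """Drive the runtime loop until all requests are served,
    re-forming the batch each iteration (FCFS, illustrative only)."""
    state = SimState()
    pending = list(requests)
    while pending:
        step(pending[:max_batch], device, state)
        pending = [r for r in pending if r.remaining_tokens > 0]
    return state

state = simulate([Request(0, 3), Request(1, 2)])
```

In this sketch, swapping the profile table or the batching policy changes the simulated latency and energy without touching the loop itself, which mirrors the extensibility claim: new accelerators enter the simulator as new profile entries rather than new simulator code.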
Problem

Research questions and friction points this paper is trying to address.

heterogeneous, disaggregated, LLM serving, runtime interaction, system simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

heterogeneous LLM serving, disaggregated infrastructure, runtime-driven simulation, hardware-software co-design, system-level modeling
👥 Authors

Jaehong Cho
School of Computing, KAIST, Daejeon, South Korea

Hyunmin Choi
School of Computing, KAIST, Daejeon, South Korea

Guseul Heo
Ph.D. student

Jongse Park
Associate Professor; School of Computing; KAIST
Computer Architecture; HW/SW Codesign; AI Systems; Autonomous Systems