🤖 AI Summary
Existing simulators struggle to jointly model runtime software-hardware interactions across heterogeneous hardware environments and disaggregated large language model (LLM) serving architectures, limiting systematic evaluation of performance, memory, and power consumption. This work proposes LLMServingSim 2.0, the first unified system-level simulator that integrates service scheduling and hardware behavior within a shared runtime loop, explicitly capturing their dynamic interactions, including batching, routing, offloading, and energy efficiency, under heterogeneous accelerators, near-memory computing, and disaggregated resource deployment. The framework employs a profiling-driven modeling approach, enabling scalable integration of emerging hardware and establishing an evaluation pathway for hardware-serving-system co-design. Experimental results demonstrate that simulations complete in approximately 10 minutes with an average error of only 0.97% on key metrics, achieving both high fidelity and practical usability.
📝 Abstract
Large language model (LLM) serving infrastructures are undergoing a shift toward heterogeneity and disaggregation. Modern deployments increasingly integrate diverse accelerators and near-memory processing technologies, introducing significant hardware heterogeneity, while serving software separates computation, memory, and model components across distributed resources to improve scalability and efficiency. As a result, LLM serving performance is no longer determined by hardware or software choices in isolation, but by their runtime interaction through scheduling, data movement, and interconnect behavior. Understanding these interactions remains challenging, however, because existing simulators cannot jointly model heterogeneous hardware and disaggregated serving techniques within a unified, runtime-driven framework.
This paper presents LLMServingSim 2.0, a unified system-level simulator designed to make runtime-driven hardware-software interactions in heterogeneous and disaggregated LLM serving infrastructures explicit and analyzable. LLMServingSim 2.0 embeds serving decisions and hardware behavior into a single runtime loop, enabling interaction-aware modeling of batching, routing, offloading, memory, and power. The simulator supports extensible integration of emerging accelerators and memory systems through profile-based modeling, while capturing dynamic serving behavior and system-level effects. We validate LLMServingSim 2.0 against real deployments, showing that it reproduces key performance, memory, and power metrics with an average error of 0.97%, while maintaining simulation times of around 10 minutes even for complex configurations. These results demonstrate that LLMServingSim 2.0 provides a practical bridge between hardware innovation and serving-system design, enabling systematic exploration and co-design for next-generation LLM serving infrastructures.
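To make the abstract's central idea concrete, here is a minimal sketch of what "embedding serving decisions and hardware behavior into a single runtime loop" with profile-based modeling can look like. All names (`PROFILE_MS`, `simulate`, the request format) are illustrative assumptions for exposition, not LLMServingSim 2.0's actual API; the profile numbers are made up.

```python
# Sketch of a unified runtime loop: batching decisions (software) and
# step latencies (hardware) advance together on one simulated clock.
# Hypothetical example, not the simulator's real interface.

# Profile-based modeling: per-(operation, batch_size) latencies would be
# measured offline on real hardware, then looked up at simulation time.
PROFILE_MS = {
    ("prefill", 1): 42.0,
    ("prefill", 2): 61.0,
    ("decode", 1): 9.0,
    ("decode", 2): 11.0,
}

def simulate(requests, max_batch=2):
    """Advance serving and hardware state in one shared loop.

    requests: list of (request_id, tokens_to_generate) tuples.
    Returns (request_id, completion_time_ms) in finish order.
    """
    clock_ms = 0.0
    pending = list(requests)
    running = []   # [request_id, tokens_left]
    finished = []
    while pending or running:
        # Serving decision: admit pending requests into the running batch.
        while pending and len(running) < max_batch:
            rid, tokens = pending.pop(0)
            clock_ms += PROFILE_MS[("prefill", len(running) + 1)]
            running.append([rid, tokens])
        # Hardware behavior: one decode step for the whole batch,
        # with latency taken from the profile for this batch size.
        clock_ms += PROFILE_MS[("decode", len(running))]
        for r in running:
            r[1] -= 1
        for r in [r for r in running if r[1] == 0]:
            running.remove(r)
            finished.append((r[0], clock_ms))
    return finished

print(simulate([("a", 2), ("b", 3)]))  # → [('a', 125.0), ('b', 134.0)]
```

Because batching and latency live in the same loop, changing a scheduling policy (e.g. `max_batch`) immediately changes the simulated hardware timeline, which is the interaction-aware behavior the paper targets; a real simulator would add routing, offloading, memory, and power models on the same clock.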