🤖 AI Summary
Existing LLM inference simulators cannot accurately model the execution behavior of heterogeneous, multi-stage workflows (such as RAG, prefill/decode disaggregation, KV cache retrieval, and multi-step reasoning) on hybrid GPU/ASIC/CPU platforms, which hinders hardware-software co-optimization. This paper introduces HERMES, the first heterogeneous, multi-stage LLM inference simulation framework to support concurrent multi-model execution, dynamic batching, and hierarchical memory modeling. HERMES integrates real hardware traces with analytical modeling to quantify the end-to-end latency impact of remote KV cache retrieval, cross-cluster communication, and memory bandwidth contention. Case studies reveal stage-specific latency sensitivities, derive batching policies for hybrid pipelines, and yield actionable guidance for hardware architecture selection and system deployment, informing hardware-software co-design for more efficient, higher-throughput LLM serving.
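To make the trace-plus-analytical idea concrete, here is a minimal sketch of one common approach: estimate per-stage latency with a roofline model and calibrate it against a single measured point from a real hardware trace. Everything in it (the `Hardware` parameters, FLOP/byte counts, and the calibration scheme) is an illustrative assumption, not HERMES's actual implementation.

```python
# Hedged sketch: roofline-style stage-latency estimate, calibrated with one
# measured trace point. All names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hardware:
    peak_flops: float   # peak compute throughput, FLOP/s
    mem_bw: float       # memory bandwidth, bytes/s

def roofline_latency(flops: float, bytes_moved: float, hw: Hardware) -> float:
    """Analytical estimate: a stage is bound by either compute or memory traffic."""
    return max(flops / hw.peak_flops, bytes_moved / hw.mem_bw)

def calibrate(measured: float, predicted: float) -> float:
    """Scale factor that folds real-trace overheads into the analytical model."""
    return measured / predicted

gpu = Hardware(peak_flops=300e12, mem_bw=2e12)  # hypothetical accelerator

# Prefill processes the whole prompt at once: typically compute-bound.
prefill_pred = roofline_latency(flops=4e12, bytes_moved=10e9, hw=gpu)
# Decode emits one token at a time while streaming the KV cache: memory-bound.
decode_pred = roofline_latency(flops=2e9, bytes_moved=30e9, hw=gpu)

# Suppose a trace measured prefill at 18 ms; reuse the scale factor elsewhere.
scale = calibrate(measured=18e-3, predicted=prefill_pred)
print(f"prefill ~{prefill_pred * scale * 1e3:.1f} ms, "
      f"decode ~{decode_pred * scale * 1e3:.2f} ms/token")
```

The split matters because prefill and decode sit on opposite sides of the roofline, which is exactly why a simulator must model them as distinct stages with distinct hardware sensitivities.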
📝 Abstract
The rapid evolution of Large Language Models (LLMs) has driven the need for increasingly sophisticated inference pipelines and hardware platforms. Modern LLM serving extends beyond traditional prefill-decode workflows, incorporating multi-stage processes such as Retrieval-Augmented Generation (RAG), key-value (KV) cache retrieval, dynamic model routing, and multi-step reasoning. These stages exhibit diverse computational demands, requiring distributed systems that integrate GPUs, ASICs, CPUs, and memory-centric architectures. However, existing simulators lack the fidelity to model these heterogeneous, multi-engine workflows, limiting their ability to inform architectural decisions. To address this gap, we introduce HERMES, a Heterogeneous Multi-stage LLM inference Execution Simulator. HERMES models diverse request stages, including RAG, KV retrieval, reasoning, prefill, and decode, across complex hardware hierarchies. Unlike prior frameworks, HERMES supports heterogeneous clients executing multiple models concurrently, and it incorporates advanced batching strategies and multi-level memory hierarchies. By integrating real hardware traces with analytical modeling, HERMES captures critical trade-offs such as memory bandwidth contention, inter-cluster communication latency, and batching efficiency in hybrid CPU-accelerator deployments. Through case studies, we explore the impact of reasoning stages on end-to-end latency, optimal batching strategies for hybrid pipelines, and the architectural implications of remote KV cache retrieval. HERMES empowers system designers to navigate the evolving landscape of LLM inference, providing actionable insights into optimizing hardware-software co-design for next-generation AI workloads.
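As a rough illustration of the kind of multi-stage, multi-engine simulation the abstract describes, the sketch below routes requests through RAG, prefill, and decode stages on separate single-engine queues using a simple event heap. The stage list, fixed service times, and FIFO scheduling are assumptions chosen for brevity, not HERMES's design.

```python
# Toy event-driven simulator of a multi-stage inference pipeline on
# heterogeneous engines. Stages, service times, and routing policy are
# illustrative assumptions, not the HERMES implementation.
import heapq

STAGES = ["rag", "prefill", "decode"]                         # per-request stage order
SERVICE_TIME = {"rag": 5.0, "prefill": 12.0, "decode": 40.0}  # ms, assumed constants
ENGINE_FREE_AT = {"rag": 0.0, "prefill": 0.0, "decode": 0.0}  # one engine per stage

def simulate(arrivals):
    """Return per-request end-to-end latency (ms) for the given arrival times."""
    events = [(t, i, 0) for i, t in enumerate(arrivals)]  # (time, request, stage idx)
    heapq.heapify(events)
    latency = {}
    while events:
        now, req, idx = heapq.heappop(events)
        stage = STAGES[idx]
        start = max(now, ENGINE_FREE_AT[stage])  # queue if the engine is busy
        finish = start + SERVICE_TIME[stage]
        ENGINE_FREE_AT[stage] = finish
        if idx + 1 < len(STAGES):
            heapq.heappush(events, (finish, req, idx + 1))  # hand off to next stage
        else:
            latency[req] = finish - arrivals[req]
    return latency

print(simulate([0.0, 1.0, 2.0]))  # contention surfaces as growing queueing delay
```

Even this toy model shows the qualitative effect the case studies examine: with closely spaced arrivals, the slowest stage's queue dominates end-to-end latency, which is why stage placement, batching, and hardware selection have to be evaluated jointly.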