AI Summary
Existing near-memory processing (NMP) approaches for dynamic large language model (LLM) serving suffer from inefficiencies due to coarse-grained key-value (KV) cache management and inflexible attention execution. To address these limitations, this work proposes Helios, a hybrid-bonded 3D-DRAM-based LLM serving accelerator that leverages a hardware-software co-design methodology. Helios introduces a spatial-aware KV cache allocation mechanism and customized inter-PE communication primitives to enable efficient execution of distributed block-wise attention. Experimental results demonstrate that Helios significantly improves both performance and energy efficiency: it achieves an average speedup of 3.25× and 3.36× higher energy efficiency compared to state-of-the-art GPU and NMP baselines, while reducing per-token generation latency by up to 72% at P50 and 76% at P99.
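The paper does not spell out Helios's exact execution flow, but distributed block-wise attention generally relies on the online-softmax identity: each PE computes attention over its local KV block and emits an unnormalized partial output plus softmax statistics (a running max and exponential sum), which are then merged into the exact global result. The sketch below illustrates that merge rule with NumPy; the function names `partial_attention` and `merge` are illustrative, not from the paper.

```python
import numpy as np

def partial_attention(q, k_blk, v_blk):
    # One PE: attention over its local KV block only.
    # Returns an unnormalized output plus softmax statistics
    # (exp-sum and running max) needed for the global merge.
    s = q @ k_blk.T / np.sqrt(q.shape[-1])   # scores, shape (1, blk)
    m = s.max(axis=-1, keepdims=True)        # local max, for numerical stability
    p = np.exp(s - m)
    return p @ v_blk, p.sum(axis=-1, keepdims=True), m

def merge(partials):
    # Combine per-PE partials with the online-softmax rule:
    # rescale every partial to the global max, then normalize.
    outs, sums, maxes = zip(*partials)
    m = np.max(np.stack(maxes), axis=0)                # global max
    scales = [np.exp(mi - m) for mi in maxes]          # per-PE rescaling factors
    num = sum(o * c for o, c in zip(outs, scales))     # merged unnormalized output
    den = sum(l * c for l, c in zip(sums, scales))     # merged softmax denominator
    return num / den
```

Because each partial carries only a small output tile plus two scalars per query, the merge is cheap to implement as an inter-PE communication primitive, independent of how KV blocks are scattered across PEs.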
Abstract
Large language models (LLMs) have been widely deployed for online generative services, where numerous LLM instances jointly handle workloads with fluctuating request arrival rates and variable request lengths. To efficiently execute the coexisting compute-intensive and memory-intensive operators, near-memory processing (NMP) based computing paradigms have been extensively explored. However, existing NMP designs adopt coarse-grained KV cache management and an inflexible attention execution flow, which prevents them from efficiently handling highly dynamic LLM serving workloads. To tackle these problems, we propose Helios, a hybrid-bonding-based LLM serving accelerator. Helios aims to bridge the fundamental gap between the dynamic nature of KV cache management in LLM serving and the distributed, non-uniform memory abstraction among NMP processing engines (PEs). To this end, we design both the intra-PE execution flow and the inter-PE communication primitives for distributed tiled attention execution. We further propose a spatially-aware KV cache allocation mechanism that balances the attention workload distribution while minimizing inter-PE data transfer overhead. Compared with existing GPU/NMP designs, Helios achieves a 3.25× (geomean) speedup and 3.36× (geomean) better energy efficiency, while reducing P50/P99 time-between-tokens by up to 72%/76%.
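The abstract describes spatially-aware KV cache allocation as balancing attention load against inter-PE transfer overhead, without giving the policy itself. A minimal greedy sketch of that trade-off, under assumed names (`PE`, `allocate_block`) and an assumed scoring rule (resident-block count plus a locality penalty `alpha` for placing a block away from its sequence's existing blocks), is:

```python
from dataclasses import dataclass, field

@dataclass
class PE:
    pe_id: int
    blocks: set = field(default_factory=set)  # resident (seq_id, blk_idx) pairs

def allocate_block(pes, seq_id, blk_idx, alpha=1.0):
    """Greedy spatially-aware placement (illustrative, not the paper's policy):
    prefer lightly loaded PEs, but discount PEs that already hold blocks of
    this sequence, since co-located blocks avoid inter-PE transfers during
    distributed attention."""
    def cost(pe):
        load = len(pe.blocks)                       # load-balancing term
        local = any(s == seq_id for s, _ in pe.blocks)
        return load + (0.0 if local else alpha)     # locality penalty if remote
    pe = min(pes, key=cost)
    pe.blocks.add((seq_id, blk_idx))
    return pe.pe_id
```

Tuning `alpha` moves the policy between pure load balancing (`alpha = 0`) and strict sequence affinity (large `alpha`), which is the tension the abstract's allocation mechanism addresses.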