🤖 AI Summary
To address the dual bottlenecks of memory capacity and bandwidth in large language model (LLM) inference, this paper proposes H2M2, a hardware-software co-designed heterogeneous memory management architecture. Methodologically, it introduces (1) a novel dynamic runtime kernel-memory mapping algorithm tailored to LLM workload characteristics, enabling precise scheduling of compute-intensive and bandwidth-sensitive kernels to capacity-optimized or bandwidth-optimized memory modules; and (2) an asymmetric heterogeneous memory architecture augmented with in-memory computation units, coupled with a unified memory abstraction layer that provides consistent programming interfaces across memory types and enables GPU-aware multi-level memory coordination. Evaluated on GPT-3-175B, Chinchilla-70B, and Llama2-70B, H2M2 achieves 1.46×, 1.55×, and 2.94× inference speedup over LPDDR-based homogeneous systems, respectively, while significantly improving energy efficiency and cost-effectiveness.
📝 Abstract
Large language models (LLMs) are among the most important emerging machine learning applications today. However, due to their huge model sizes and memory footprints that grow at runtime, LLM inference suffers from insufficient memory capacity on conventional systems consisting of multiple GPUs with a modest amount of high-bandwidth memory. Moreover, since LLMs contain many bandwidth-intensive kernels, focusing only on memory capacity without considering bandwidth incurs serious performance degradation. To handle such conflicting memory capacity and bandwidth demands in a cost-effective way, this study investigates the potential of heterogeneous memory systems, proposing H2M2. It uses an asymmetric memory architecture consisting of capacity-centric and bandwidth-centric memory with computation units attached to each memory device. With this asymmetric memory, we first analyze the effect of kernel-memory mapping. Second, we propose a dynamic runtime algorithm that finds a mapping solution considering the characteristics of LLM operations and the change of footprint during LLM inference. Third, we advocate the need for memory abstraction for the efficient management of the asymmetric memory. H2M2 outperforms a conventional homogeneous memory system with LPDDR by 1.46x, 1.55x, and 2.94x speedup on GPT-3-175B, Chinchilla-70B, and Llama2-70B, respectively.
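To make the kernel-memory mapping idea concrete, here is a minimal, hypothetical sketch of a greedy mapper in the spirit described above: bandwidth-bound kernels (low arithmetic intensity) are placed in the bandwidth-centric pool while it has room, and the rest go to the capacity-centric pool. This is purely illustrative; the paper's actual dynamic runtime algorithm, thresholds, and kernel model are not reproduced here.

```python
# Illustrative sketch only -- NOT the H2M2 algorithm. All names, thresholds,
# and capacities below are hypothetical assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class Kernel:
    name: str
    footprint_gb: float     # memory occupied by the kernel's operands
    arith_intensity: float  # FLOPs per byte moved (low => bandwidth-bound)

@dataclass
class MemoryPool:
    name: str
    capacity_gb: float
    used_gb: float = 0.0

    def try_alloc(self, gb: float) -> bool:
        """Reserve space if it fits; return whether the allocation succeeded."""
        if self.used_gb + gb <= self.capacity_gb:
            self.used_gb += gb
            return True
        return False

def map_kernels(kernels, bw_pool, cap_pool, intensity_threshold=10.0):
    """Greedy placement: most bandwidth-bound kernels first into the
    bandwidth-centric pool; everything else into the capacity-centric pool."""
    mapping = {}
    for k in sorted(kernels, key=lambda k: k.arith_intensity):
        if k.arith_intensity < intensity_threshold and bw_pool.try_alloc(k.footprint_gb):
            mapping[k.name] = bw_pool.name
        elif cap_pool.try_alloc(k.footprint_gb):
            mapping[k.name] = cap_pool.name
        else:
            mapping[k.name] = "spill"  # a real system would remap or evict here
    return mapping

# Toy workload: KV-cache reads are bandwidth-bound; large FFN GEMMs are compute-bound.
kernels = [
    Kernel("attention_kv", footprint_gb=40, arith_intensity=1.5),
    Kernel("ffn_weights", footprint_gb=120, arith_intensity=60.0),
    Kernel("logits", footprint_gb=5, arith_intensity=2.0),
]
result = map_kernels(kernels,
                     MemoryPool("bandwidth-centric", capacity_gb=80),
                     MemoryPool("capacity-centric", capacity_gb=512))
print(result)
```

A real runtime would additionally re-run such a mapping as the footprint changes during inference (e.g., as the KV cache grows), which is what motivates the dynamic algorithm and memory abstraction layer described above.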