Hardware-based Heterogeneous Memory Management for Large Language Model Inference

📅 2025-04-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the dual bottlenecks of memory capacity and bandwidth in large language model (LLM) inference, this paper proposes H2M2, a hardware-software co-designed heterogeneous memory management architecture. Methodologically, it introduces (1) a novel dynamic runtime kernel-memory mapping algorithm tailored to LLM workload characteristics, enabling precise scheduling of compute-intensive and bandwidth-sensitive kernels to capacity-optimized or bandwidth-optimized memory modules; and (2) an asymmetric heterogeneous memory architecture augmented with in-memory computation units, coupled with a unified memory abstraction layer that provides consistent programming interfaces across memory types and enables GPU-aware multi-level memory coordination. Evaluated on GPT-3-175B, Chinchilla-70B, and Llama2-70B, H2M2 achieves 1.46×, 1.55×, and 2.94× inference speedup over LPDDR-based homogeneous systems, respectively, while significantly improving energy efficiency and cost-effectiveness.
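The kernel-memory mapping idea above can be sketched as a minimal heuristic. This is an illustrative assumption, not the paper's actual algorithm: the function name, the arithmetic-intensity criterion, and the threshold value are all hypothetical, chosen only to show how a runtime could route kernels between the two memory pools.

```python
# Hypothetical sketch of kernel-to-memory mapping for an asymmetric
# heterogeneous memory system. The intensity threshold is an
# illustrative assumption, not a value from the paper.

def map_kernel(flops: float, bytes_moved: float,
               intensity_threshold: float = 10.0) -> str:
    """Assign a kernel to a memory pool by arithmetic intensity (FLOPs/byte).

    Bandwidth-bound kernels (low intensity, e.g. GEMV-style attention
    over the KV cache) go to the bandwidth-centric memory; kernels with
    high data reuse can tolerate the capacity-centric memory.
    """
    intensity = flops / bytes_moved
    if intensity < intensity_threshold:
        return "bandwidth-centric"
    return "capacity-centric"

# GEMV-like kernel: ~2 FLOPs per byte moved -> bandwidth-bound
print(map_kernel(flops=2e9, bytes_moved=1e9))    # bandwidth-centric
# Large batched GEMM: heavy reuse -> capacity-centric pool suffices
print(map_kernel(flops=2e12, bytes_moved=1e9))   # capacity-centric
```

In the paper's full scheme this decision is made dynamically at runtime, since the KV-cache footprint grows during inference and can force remapping between pools.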

📝 Abstract
A large language model (LLM) is one of the most important emerging machine learning applications nowadays. However, due to its huge model size and the growth of its memory footprint at runtime, LLM inference suffers from insufficient memory capacity in conventional systems consisting of multiple GPUs with a modest amount of high-bandwidth memory. Moreover, since LLMs contain many bandwidth-intensive kernels, focusing only on memory capacity without considering bandwidth incurs a serious performance degradation. To handle such conflicting memory capacity and bandwidth demands in a cost-effective way, this study investigates the potential of heterogeneous memory systems, proposing H2M2. It uses an asymmetric memory architecture consisting of capacity-centric and bandwidth-centric memory, with computation units attached to each memory device. With this asymmetric memory, we first analyze the effect of kernel-memory mapping. Second, we propose a dynamic runtime algorithm that finds a mapping solution considering the characteristics of LLM operations and the change of footprint during LLM inference. Third, we advocate the need for memory abstraction for the efficient management of the asymmetric memory. H2M2 outperforms the conventional homogeneous memory system with LPDDR by 1.46x, 1.55x, and 2.94x speedup in GPT3-175B, Chinchilla-70B, and Llama2-70B, respectively.
Problem

Research questions and friction points this paper is trying to address.

Addresses memory capacity limitations in LLM inference
Balances memory bandwidth and capacity demands cost-effectively
Proposes dynamic runtime algorithm for heterogeneous memory management
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymmetric memory architecture for LLM inference
Dynamic runtime kernel-memory mapping algorithm
Memory abstraction for efficient heterogeneous management
Soojin Hwang
KAIST, Republic of Korea
Jungwoo Kim
Stanford University, California, USA
Sanghyeon Lee
KAIST, Republic of Korea
Hongbeen Kim
KAIST, Republic of Korea
Jaehyuk Huh
KAIST, Republic of Korea
Computer Architecture, Operating Systems, System Security