🤖 AI Summary
In PIM architectures, host processors (e.g., GPUs) and PIM cores exhibit conflicting data layout requirements: hosts favor striping consecutive elements across banks for high bandwidth utilization, whereas PIM cores require dense, bank-local data placement to minimize data movement—causing frequent, costly data reordering during ML kernel execution and severely limiting both performance and programmability. This paper proposes the first unified tensor compilation framework that jointly optimizes data reordering and compute code. It introduces a multi-level PIM hardware abstraction model enabling co-design of data distribution and computation strategies across heterogeneous PIM backends (e.g., HBM-PIM, AttAcc). The framework integrates loop tiling-based mapping, PIM-customized code optimizations, and an efficient, prediction-model-guided configuration search. Evaluation shows average speedups of 2.7× (HBM-PIM) and 5.75× (AttAcc) over GPU-only execution for individual ML operators; end-to-end LLM inference achieves a 4.88× average speedup on AttAcc for GPT-3 and LLaMA-2.
📝 Abstract
Processing-In-Memory (PIM) devices integrated with high-performance Host processors (e.g., GPUs) can accelerate memory-intensive kernels in Machine Learning (ML) models, including Large Language Models (LLMs), by leveraging the high memory bandwidth available at PIM cores. However, Host processors and PIM cores require different data layouts: Hosts need consecutive elements distributed across DRAM banks, while PIM cores need them within local banks. This necessitates data rearrangements during ML kernel execution that pose significant performance and programmability challenges, further exacerbated by the need to support diverse PIM backends. Current compilation approaches lack systematic optimization for diverse ML kernels across multiple PIM backends and may largely ignore data rearrangements during compute code optimization. We demonstrate that data rearrangements and compute code optimization are interdependent and need to be jointly optimized during the tuning process. To address this, we design DCC, the first data-centric ML compiler for PIM systems that jointly co-optimizes data rearrangements and compute code in a unified tuning process. DCC integrates a multi-layer PIM abstraction that enables various data distribution and processing strategies on different PIM backends. DCC enables effective co-optimization by mapping data partitioning strategies to compute loop partitions, applying PIM-specific code optimizations, and leveraging a fast and accurate performance prediction model to select optimal configurations. Our evaluations on various individual ML kernels demonstrate that DCC achieves up to 7.68x speedup (2.7x average) on HBM-PIM and up to 13.17x speedup (5.75x average) on the AttAcc PIM backend over GPU-only execution. In end-to-end LLM inference, DCC on AttAcc accelerates GPT-3 and LLaMA-2 by up to 7.71x (4.88x average) over GPU-only execution.
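The layout conflict at the heart of the abstract can be made concrete with a toy model. The sketch below is purely illustrative and not taken from DCC: the bank count, helper names, and the 1-D tensor are assumptions chosen to show why a host-friendly (bank-interleaved) placement and a PIM-friendly (bank-local) placement of the same tensor disagree, forcing elements to physically move between banks when execution shifts from Host to PIM.

```python
# Illustrative sketch only (not DCC's implementation): model a 1-D tensor
# distributed over 4 DRAM banks in two conflicting layouts.

NUM_BANKS = 4  # assumed toy bank count

def host_layout(tensor):
    """Host-friendly: consecutive elements interleaved round-robin across
    banks, so one wide access engages all banks in parallel."""
    banks = [[] for _ in range(NUM_BANKS)]
    for i, x in enumerate(tensor):
        banks[i % NUM_BANKS].append(x)
    return banks

def pim_layout(tensor):
    """PIM-friendly: each bank holds a dense contiguous chunk, so a PIM core
    finds all of its operands locally, without cross-bank traffic."""
    chunk = len(tensor) // NUM_BANKS
    return [tensor[b * chunk:(b + 1) * chunk] for b in range(NUM_BANKS)]

def rearrangement_moves(src, dst):
    """Count elements that must migrate to a different bank to convert
    one layout into the other -- the cost the compiler must manage."""
    return sum(len(set(src[b]) - set(dst[b])) for b in range(NUM_BANKS))

tensor = list(range(16))
h, p = host_layout(tensor), pim_layout(tensor)
print(h[0])  # bank 0 for the host:  [0, 4, 8, 12]
print(p[0])  # bank 0 for PIM cores: [0, 1, 2, 3]
print(rearrangement_moves(h, p))  # 12 of the 16 elements change banks
```

Because the two layouts share only one element per bank in this toy case, 12 of 16 elements must cross banks on every Host-to-PIM handoff; this is the rearrangement overhead that motivates co-optimizing data placement with the compute code rather than treating it as a fixed pre-pass.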