🤖 AI Summary
To address the memory bandwidth and capacity bottlenecks of LLM decoding on edge NPU devices, this paper proposes the first column-major, tile-based data layout and configurable DRAM address-mapping mechanism designed for NPU-PIM co-computing. Through tile-based columnar data placement, memory-affinity optimization, and dynamic address remapping, the approach simultaneously resolves three key challenges (data layout mismatch, bandwidth underutilization, and redundant storage) without additional storage overhead. Evaluated on OPT-family models, it reduces time-to-first-token by up to 3.0× and time-to-last-token by up to 2.18×, significantly improving end-to-end inference throughput. This work establishes a scalable, system-level co-design paradigm for efficient PIM acceleration of edge-deployed LLMs.
📝 Abstract
Large Language Models (LLMs) are increasingly deployed on edge devices with Neural Processing Units (NPUs), yet the decode phase remains memory-intensive, limiting performance. Processing-in-Memory (PIM) offers a promising solution, but co-executing NPU-PIM systems face challenges such as data layout mismatches, bandwidth loss, and redundant storage. To address these issues, we propose UMDAM, a unified memory-affinity data layout and DRAM address mapping scheme tailored for NPU-PIM co-execution. UMDAM employs a column-major, tile-based layout and a configurable DRAM mapping strategy to ensure compatibility with NPU computation while maximizing PIM efficiency -- without introducing extra memory overhead or bandwidth loss. Comprehensive evaluations on OPT models demonstrate that UMDAM reduces time-to-first-token (TTFT) by up to 3.0x and time-to-last-token (TTLT) by 2.18x, significantly improving end-to-end LLM inference efficiency on edge devices.
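To make the layout idea concrete, here is a minimal sketch of a column-major, tile-based placement plus a configurable bit-field address mapping. The tile sizes, field widths, and field order are illustrative assumptions, not UMDAM's actual parameters; the paper's real scheme is tuned to the NPU/PIM hardware.

```python
import numpy as np

def tile_column_major_layout(W, th=4, tw=4):
    """Flatten matrix W into a 1-D buffer by walking tiles in
    column-major order (all tiles of tile-column 0 top-to-bottom,
    then tile-column 1, ...). Tile shape th x tw is an assumption
    for illustration, not the paper's configuration."""
    R, C = W.shape
    assert R % th == 0 and C % tw == 0
    buf = []
    for tc in range(C // tw):          # tile columns first -> column-major
        for tr in range(R // th):      # then tile rows within the column
            tile = W[tr*th:(tr+1)*th, tc*tw:(tc+1)*tw]
            buf.append(tile.ravel())   # row-major inside each tile
    return np.concatenate(buf)

def map_address(linear_word, ch_bits=1, bank_bits=2, col_bits=5):
    """Toy configurable DRAM address mapping: slice a linear word
    index into (row, bank, channel, column) bit fields. Field order
    and widths are hypothetical; a real mapping would be chosen to
    spread consecutive tiles across channels/banks for PIM parallelism."""
    col = linear_word & ((1 << col_bits) - 1)
    rest = linear_word >> col_bits
    ch = rest & ((1 << ch_bits) - 1)
    rest >>= ch_bits
    bank = rest & ((1 << bank_bits) - 1)
    row = rest >> bank_bits
    return {"row": row, "bank": bank, "channel": ch, "column": col}
```

For an 8x8 matrix with 4x4 tiles, the buffer starts with the top-left tile followed by the bottom-left tile, so the elements a PIM bank streams for one output column are contiguous; `map_address` then decides which channel/bank each contiguous run lands in.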