From Principles to Practice: A Systematic Study of LLM Serving on Multi-core NPUs

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low resource utilization and performance bottlenecks in large language model (LLM) inference on multi-core neural processing units (NPUs) stem from inefficient tensor parallelism, suboptimal core placement, and coarse-grained memory management. Method: the paper proposes a systematic optimization framework: (i) a multi-level simulation infrastructure supporting both transaction-level and performance-model-based modeling to co-design architecture and deployment; (ii) joint optimization of tensor partitioning, physical core layout, and hierarchical memory management; and (iii) an adaptive selection between prefill-decode (PD) disaggregation and PD fusion. Contribution/Results: evaluated across diverse LLMs and NPU configurations, the approach achieves 1.32×–6.03× higher inference throughput than state-of-the-art designs, improving both inference efficiency and hardware adaptability of multi-core AI accelerators.

📝 Abstract
With the widespread adoption of Large Language Models (LLMs), the demand for high-performance LLM inference services continues to grow. To meet this demand, a growing number of AI accelerators have been proposed, such as Google TPU, Huawei NPU, Graphcore IPU, and Cerebras WSE. Most of these accelerators adopt multi-core architectures to achieve enhanced scalability, but lack the flexibility of SIMT architectures. Therefore, without careful configuration of the hardware architecture, as well as deliberate design of tensor parallelism and core placement strategies, computational resources may be underutilized, resulting in suboptimal inference performance. To address these challenges, we first present a multi-level simulation framework with both transaction-level and performance-model-based simulation for multi-core NPUs. Using this simulator, we conduct a systematic analysis and propose optimal solutions for tensor parallelism strategies, core placement policies, memory management methods, as well as the selection between PD-disaggregation and PD-fusion on multi-core NPUs. We conduct comprehensive experiments on representative LLMs and various NPU configurations. The evaluation results demonstrate that our solution can achieve a 1.32x–6.03x speedup compared to SOTA designs for multi-core NPUs across different hardware configurations. For LLM serving, our work offers guidance on designing optimal hardware architectures and serving strategies for multi-core NPUs across various LLM workloads.
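To make the tensor parallelism problem concrete, the sketch below illustrates column-parallel partitioning of a weight matrix across cores, one of the classic sharding strategies a framework like this would evaluate. This is an illustrative example in plain Python, not the paper's implementation; the function names and the contiguous column split are assumptions for clarity.

```python
# Illustrative sketch of column-parallel tensor partitioning (not the
# paper's code): a weight matrix W is split column-wise across
# hypothetical NPU cores, each core computes a partial output against
# its shard, and the partial outputs are concatenated (an all-gather).

def matmul(A, B):
    """Plain dense matrix product, used as the reference result."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def split_columns(W, num_cores):
    """Give each core a contiguous slice of W's columns."""
    cols = list(zip(*W))                        # transpose: list of columns
    per_core = (len(cols) + num_cores - 1) // num_cores
    shards = [cols[i:i + per_core] for i in range(0, len(cols), per_core)]
    return [list(zip(*s)) for s in shards]      # transpose each shard back

def column_parallel_matmul(X, W, num_cores):
    """Each core multiplies X by its shard; outputs are concatenated."""
    partials = [matmul(X, shard) for shard in split_columns(W, num_cores)]
    return [sum((p[i] for p in partials), []) for i in range(len(X))]
```

Because each core holds the full activation X but only a column slice of W, no communication is needed until the final concatenation; row-parallel partitioning trades this for a summing all-reduce instead, which is part of the design space the paper's simulator explores.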
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM inference performance on multi-core NPU architectures
Addressing computational underutilization through tensor parallelism strategies
Improving core placement and memory management for NPU efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-level simulation framework for NPU analysis
Optimized tensor parallelism and core placement strategies
Enhanced memory management and PD selection methods
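The PD selection mentioned above refers to choosing between PD-disaggregation (dedicated core pools for the prefill and decode phases) and PD-fusion (all cores serve both phases). The sketch below shows only the structural difference between the two layouts; the 50/50 pool split and the function name are illustrative assumptions, not the paper's adaptive policy.

```python
# Hedged sketch of the PD-disaggregation vs. PD-fusion layouts the
# paper selects between. Pool sizing here is an assumed even split;
# the paper's actual selection mechanism is workload-adaptive.

def assign_cores(total_cores, mode):
    """Map the prefill and decode phases onto a multi-core NPU.

    mode="disaggregated": separate core pools per phase, so prefill
    bursts cannot stall latency-sensitive decode steps.
    mode="fused": every core serves both phases, maximizing
    utilization when one phase would otherwise idle its pool.
    """
    if mode == "disaggregated":
        split = total_cores // 2          # illustrative 50/50 split
        return {"prefill_cores": list(range(split)),
                "decode_cores": list(range(split, total_cores))}
    # fused: both phases share the full set of cores
    return {"prefill_cores": list(range(total_cores)),
            "decode_cores": list(range(total_cores))}
```

Disaggregation isolates the compute-bound prefill phase from the memory-bound decode phase at the cost of idle capacity when load is skewed, which is why an adaptive choice between the two can outperform either fixed layout.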
Tianhao Zhu
Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University
Dahu Feng
Department of Precision Instrument, Tsinghua University
Erhu Feng
Shanghai Jiao Tong University
MLSYS, Operating System, Architecture
Yubin Xia
Professor, Shanghai Jiao Tong University
Operating System, Virtualization, Computer Architecture, System Security