D-Legion: A Scalable Many-Core Architecture for Accelerating Matrix Multiplication in Quantized LLMs

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the computational intensity and frequent memory accesses inherent in quantized large language model (LLM) inference by proposing D-Legion, a scalable many-core acceleration architecture built from adaptive-precision systolic arrays that support both dense and block-sparse matrix multiplication. Key innovations include a block-sparse windowing mechanism, parallel accumulators that minimize partial-sum (psum) storage, multicast scheduling that improves inter-Legion data reuse, and fine-grained design space exploration, collectively enabling high energy efficiency and scalability. Evaluated on BitNet attention workloads, D-Legion achieves up to 8.2× lower latency and up to 3.8× higher memory savings than state-of-the-art approaches, with the baseline eight-Legion (64-core) configuration reaching a peak throughput of 135.68 TOPS. A scaled 32-Legion configuration outperforms TPUv4i by 2.5× in latency reduction, 2.3× in throughput, and 2.7× in memory savings.
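To make the block-sparse computation mode concrete, below is a minimal reference sketch (not the paper's hardware or RTL) of block-structured sparse matrix multiplication in the spirit of D-Legion's windowing: weight blocks flagged as all-zero are skipped entirely, while non-zero blocks are issued as dense tile products. The block size, mask format, and function names are illustrative assumptions, and the fully-sparse vs. partially-sparse window distinction is simplified to a per-block skip decision.

```python
# Hypothetical software model of block-structured sparse matmul with window skipping.
# Names, block size, and mask layout are assumptions for illustration only.
import numpy as np

def block_sparse_matmul(a, w, block_mask, block=4):
    """Compute a @ w while skipping weight blocks flagged as all-zero.

    a          : (M, K) dense activations
    w          : (K, N) block-structured sparse weights
    block_mask : (K // block, N // block) boolean map, True = non-zero block
    """
    m, k = a.shape
    _, n = w.shape
    out = np.zeros((m, n), dtype=np.float32)
    for bk in range(k // block):            # walk weight blocks along K
        for bn in range(n // block):        # and along N
            if not block_mask[bk, bn]:
                continue                    # fully-sparse window: no compute issued
            ks, ns = bk * block, bn * block
            # non-zero window: dense tile product, accumulated into the output
            out[:, ns:ns + block] += a[:, ks:ks + block] @ w[ks:ks + block, ns:ns + block]
    return out
```

As a usage example, `block_mask = (np.abs(w).reshape(K // 4, 4, N // 4, 4).sum(axis=(1, 3)) > 0)` would derive the mask directly from a pre-pruned weight matrix; the compute saved grows with the fraction of all-zero blocks.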

📝 Abstract
The performance gains obtained by large language models (LLMs) are closely linked to their substantial computational and memory requirements. Quantized LLMs offer significant efficiency advantages, particularly at extreme quantization levels, motivating the development of specialized architectures to accelerate their workloads. This paper proposes D-Legion, a novel scalable many-core architecture, built from many adaptive-precision systolic array cores, to accelerate matrix multiplication in quantized LLMs. The proposed architecture consists of a set of Legions, where each Legion contains a group of adaptive-precision systolic arrays. D-Legion supports multiple computation modes, including quantized sparse and dense matrix multiplications. Block-structured sparsity is exploited within fully-sparse or partially-sparse windows. In addition, memory accesses of partial summations (psums) are spatially reduced through parallel accumulators. Furthermore, data reuse is maximized through optimized scheduling techniques that multicast matrix tiles across the Legions. A comprehensive design space exploration is performed in terms of Legion/core granularity to determine the optimal Legion configuration. D-Legion is evaluated on attention workloads from two BitNet models, delivering up to 8.2$\times$ lower latency, up to 3.8$\times$ higher memory savings, and up to 3$\times$ higher psum memory savings compared to state-of-the-art work. D-Legion, with eight Legions and 64 total cores, achieves a peak throughput of 135.68 TOPS at a frequency of 1 GHz. A scaled version of D-Legion, with 32 Legions, is compared to Google TPUv4i, achieving up to 2.5$\times$ lower total latency, up to 2.3$\times$ higher total throughput, and up to 2.7$\times$ higher total memory savings.
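The multicast scheduling and parallel-accumulator ideas described in the abstract can be illustrated with a minimal cost-model sketch, again under stated assumptions rather than as the paper's actual scheduler: one activation tile is fetched from global memory once and multicast to all Legions, each of which multiplies it against its own weight tile and keeps the running psum in local accumulators instead of spilling it to memory. Tile shapes, the Legion count, and the fetch-counting model are hypothetical.

```python
# A minimal scheduling sketch, assuming a simplified traffic model.
# Function and variable names are illustrative, not the paper's API.
import numpy as np

def multicast_schedule(a_tiles, w_tiles_per_legion):
    """a_tiles: list of (Tm, Tk) activation tiles (the shared, multicast operand).
    w_tiles_per_legion: one list of (Tk, Tn) weight tiles per Legion, aligned with a_tiles.
    Returns per-Legion output tiles and the number of global activation fetches."""
    n_legions = len(w_tiles_per_legion)
    outputs = [None] * n_legions
    global_fetches = 0
    for step, a_tile in enumerate(a_tiles):
        global_fetches += 1                      # fetched once, then multicast to every Legion
        for lg in range(n_legions):
            psum = a_tile @ w_tiles_per_legion[lg][step]
            # parallel accumulators: psums stay resident locally, never written back per step
            outputs[lg] = psum if outputs[lg] is None else outputs[lg] + psum
    return outputs, global_fetches
```

Under this model the shared activation operand is read `len(a_tiles)` times in total rather than `len(a_tiles) × n_legions` times, which is the data-reuse effect that multicasting across Legions targets, while keeping psums in local accumulators removes the per-step partial-sum store/load traffic.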
Problem

Research questions and friction points this paper is trying to address.

quantized LLMs
matrix multiplication
acceleration
many-core architecture
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive-precision systolic array
quantized LLM acceleration
block-structured sparsity
parallel accumulators
data reuse scheduling
Ahmed J. Abdelmaksoud
Centre for Electronics Frontiers, Institute for Integrated Micro and Nano Systems, School of Engineering, The University of Edinburgh, EH9 3BF, Edinburgh, United Kingdom
Cristian Sestito
Centre for Electronics Frontiers, Institute for Integrated Micro and Nano Systems, School of Engineering, The University of Edinburgh, EH9 3BF, Edinburgh, United Kingdom
Shiwei Wang
National University of Singapore, Research Fellow
Matrix optimization with applications · Perturbation analysis
Themis Prodromakis
Regius Chair of Engineering, Centre for Electronics Frontiers, University of Edinburgh
Nanotechnology · Memristors · Nanoelectronics · Sensors · Point-of-care diagnostics