ADiP: Adaptive Precision Systolic Array for Matrix Multiplication Acceleration

📅 2025-10-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High computational and memory overheads of matrix multiplication in Transformer models necessitate efficient hardware acceleration. This paper proposes a reconfigurable systolic array architecture supporting dynamic precision adaptation. Its key contributions are: (1) an N×N adaptive-precision processing unit enabling symmetric/asymmetric matrix multiplication and mixed-precision computation; (2) a shared accumulator design that enhances computational density and data reuse; and (3) integration with multi-precision quantization for hardware design-space exploration in a 22 nm technology. Evaluated on GPT-2 Medium, BERT Large, and BitNet-1.58B, the architecture achieves up to 4× higher compute throughput, 53.6% lower latency, and 24.4% reduced energy consumption versus baseline accelerators. At a 64×64 array scale, it delivers a peak performance of 32.768 TOPS.
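As a sanity check (not from the paper's text), the reported peak-throughput figures are consistent with a simple model: peak TOPS = PEs × 2 ops per MAC × clock × precision packing factor. The ~1 GHz clock below is inferred from the reported numbers, not a stated specification.

```python
# Hypothetical back-of-envelope model for the peak throughput of an
# N x N systolic array. The 1 GHz clock is an inference from the
# reported 8.192 TOPS at 64x64 with 8bitx8bit, not a stated spec.
def peak_tops(n, clock_hz, packing):
    pes = n * n           # number of processing elements
    ops_per_mac = 2       # one multiply + one accumulate
    return pes * ops_per_mac * clock_hz * packing / 1e12

clock_hz = 1e9  # assumed 1 GHz
for label, packing in [("8bitx8bit", 1), ("8bitx4bit", 2), ("8bitx2bit", 4)]:
    print(label, peak_tops(64, clock_hz, packing), "TOPS")
# 8bitx8bit -> 8.192, 8bitx4bit -> 16.384, 8bitx2bit -> 32.768
```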

📝 Abstract
Transformers are at the core of modern AI. They rely heavily on matrix multiplication and require efficient acceleration due to their substantial memory and computational requirements. Quantization plays a vital role in reducing memory usage, and can be exploited for computation by designing reconfigurable architectures that enhance matrix multiplication by dynamically adjusting the precision. This paper proposes ADiP, a novel adaptive-precision systolic array architecture designed for efficient matrix multiplication acceleration. The proposed architecture consists of N×N adaptive-precision processing elements (PEs) and shared accumulators. ADiP supports multiple computation modes, including symmetric single-matrix multiplication as well as asymmetric multi-matrix multiplication with a shared input matrix, thereby improving data reuse and PE utilization. In addition, ADiP maximizes computational density by adapting to different precisions, such as 8bitx8bit, 8bitx4bit, and 8bitx2bit. Analytical models are developed for the ADiP architecture, including latency and throughput for versatile architecture configurations. A comprehensive hardware design-space exploration is demonstrated using a 22 nm commercial technology, achieving up to 4x higher computational throughput. Furthermore, ADiP is evaluated on different transformer workloads from the GPT-2 Medium, BERT Large, and BitNet-1.58B models, delivering latency improvement up to 53.6% and energy improvement up to 24.4% for BitNet-1.58B MHA workloads. At a 64x64 size with 4096 PEs, ADiP achieves a peak throughput of 8.192 TOPS, 16.384 TOPS, and 32.768 TOPS for 8bitx8bit, 8bitx4bit, and 8bitx2bit operations, respectively.
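The asymmetric shared-input mode described in the abstract can be illustrated functionally (a behavioural sketch under assumed bit-widths, not the paper's hardware dataflow): one input matrix A is read once and reused across several weight matrices quantized to different precisions.

```python
import numpy as np

# Behavioural sketch (assumed, not ADiP's RTL): a shared 8-bit input
# matrix A multiplied against weight matrices clipped to 8-, 4-, and
# 2-bit signed ranges, mimicking the shared-input multi-matrix mode.
rng = np.random.default_rng(0)
A = rng.integers(-128, 128, size=(4, 4), dtype=np.int32)  # 8-bit input

def quantize(w, bits):
    """Clip to the signed range of the given bit-width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(w, lo, hi)

W8 = quantize(rng.integers(-200, 200, (4, 4)), 8)
W4 = quantize(rng.integers(-200, 200, (4, 4)), 4)
W2 = quantize(rng.integers(-200, 200, (4, 4)), 2)

# A is streamed once and reused for all three multiplications.
outputs = [A @ W for W in (W8, W4, W2)]
```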
Problem

Research questions and friction points this paper is trying to address.

Accelerating matrix multiplication for transformer models
Designing adaptive-precision systolic array architecture
Enhancing computational throughput and energy efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive-precision systolic array for matrix multiplication
Multiple computation modes with shared accumulators
Dynamic precision adaptation for higher computational density
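The systolic-array idea behind these contributions can be sketched with a minimal output-stationary simulation (an illustrative generic model, not ADiP's specific PE microarchitecture): each PE (i, j) accumulates one output element as row i of A and column j of B stream past it.

```python
# Illustrative output-stationary systolic matmul: PE (i, j) holds the
# running sum for C[i][j] while operands stream through one step at a
# time. Generic sketch; ADiP's PE and accumulator design may differ.
def systolic_matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for step in range(n):          # one operand pair per PE per step
        for i in range(n):
            for j in range(n):
                C[i][j] += A[i][step] * B[step][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```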
Ahmed J. Abdelmaksoud
Centre for Electronics Frontiers, Institute for Integrated Micro and Nano Systems, School of Engineering, The University of Edinburgh, EH9 3BF, Edinburgh, United Kingdom
Cristian Sestito
Centre for Electronics Frontiers, Institute for Integrated Micro and Nano Systems, School of Engineering, The University of Edinburgh, EH9 3BF, Edinburgh, United Kingdom
Shiwei Wang
National University of Singapore, Research Fellow
Matrix optimization with applications · Perturbation analysis
Themis Prodromakis
Regius Chair of Engineering, Centre for Electronics Frontiers, University of Edinburgh
Nanotechnology · Memristors · Nanoelectronics · Sensors · Point-of-care diagnostics