Transitive Array: An Efficient GEMM Accelerator with Result Reuse

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address memory and computational bottlenecks in GEMM operations during DNN/LLM inference, this work introduces "transitive sparsity", a novel paradigm that models GEMM dependencies as a directed acyclic graph (DAG) and eliminates redundant multiplications via intermediate result reuse, enabling multiplication-free hardware acceleration. The authors propose the first multiplication-free array architecture supporting result reuse, integrating DAG-driven scheduling, multi-lane load balancing, and a quantization-aware sparse computation flow. Evaluated on LLaMA-family models, the design achieves 7.46× and 3.97× inference speedup over Olive and BitVert, respectively, along with 2.31× and 1.65× energy-efficiency improvements, all without accuracy loss. The core contribution is the formalization of transitive sparsity and the end-to-end hardware realization of its computational principle, establishing the first complete hardware-software co-design loop for this paradigm.
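The result-reuse principle behind transitive sparsity can be illustrated with a toy sketch (this is an illustrative reconstruction, not the paper's actual implementation): treat each quantized weight row as a bit mask over the activations, link every mask to a "parent" mask with one fewer set bit, and walk the resulting DAG in order of increasing popcount so each dot product costs a single addition on top of an already-computed result.

```python
# Toy illustration of result reuse over bit patterns (assumed model of
# transitive sparsity, not the paper's hardware dataflow).
# Each weight row is a bit mask selecting which activations to sum.
# Rather than summing from scratch, each pattern's result is derived from
# a parent pattern with one fewer set bit -- one addition per DAG node.

def reuse_dot_products(x, num_bits=4):
    """Compute sum(x[i] for each set bit i) for every mask, via reuse.

    Iterating masks in increasing numeric order guarantees the parent
    (which is always numerically smaller) is computed first, i.e. a
    valid topological order of the pattern DAG.
    """
    results = [0.0] * (1 << num_bits)
    for p in range(1, 1 << num_bits):
        low = p & -p                      # lowest set bit of the mask
        parent = p ^ low                  # same mask with that bit cleared
        results[p] = results[parent] + x[low.bit_length() - 1]
    return results

x = [1.0, 2.0, 4.0, 8.0]
res = reuse_dot_products(x)
# mask 0b1011 selects x[0], x[1], x[3]: 1 + 2 + 8 = 11
```

Computing all 2^n masks naively costs O(n·2^n) additions; the reuse scheme needs exactly one addition per mask, which is the multiplication-free saving the accelerator exploits in hardware.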

📝 Abstract
Deep Neural Networks (DNNs) and Large Language Models (LLMs) have revolutionized artificial intelligence, yet their deployment faces significant memory and computational challenges, especially in resource-constrained environments. Quantization techniques have mitigated some of these issues by reducing data precision, primarily focusing on General Matrix Multiplication (GEMM). This study introduces a novel sparsity paradigm, transitive sparsity, which leverages the reuse of previously computed results to substantially minimize computational overhead in GEMM operations. By representing transitive relations using a directed acyclic graph, we develop an efficient strategy for determining optimal execution orders, thereby overcoming inherent challenges related to execution dependencies and parallelism. Building on this foundation, we present the Transitive Array, a multiplication-free accelerator designed to exploit transitive sparsity in GEMM. Our architecture effectively balances computational workloads across multiple parallel lanes, ensuring high efficiency and optimal resource utilization. Comprehensive evaluations demonstrate that the Transitive Array achieves approximately 7.46× and 3.97× speedup and 2.31× and 1.65× energy reduction compared to state-of-the-art accelerators such as Olive and BitVert while maintaining comparable model accuracy on LLaMA models.
Problem

Research questions and friction points this paper is trying to address.

Reducing computational overhead in GEMM via transitive sparsity
Optimizing execution order for parallelism in matrix operations
Designing multiplication-free accelerator for efficient resource utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces transitive sparsity for GEMM efficiency
Uses directed acyclic graph for execution optimization
Develops multiplication-free Transitive Array accelerator