🤖 AI Summary
Existing ARM linear algebra libraries fail to fully exploit the Scalable Matrix Extension (SME) architecture, particularly suffering from severe performance bottlenecks in large-scale General Matrix Multiplication (GEMM). To address this, we propose MpGEMM—the first open-source, high-performance GEMM library specifically designed for ARM SME. We conduct the first systematic microarchitectural characterization of SME and derive three key optimization principles: cache-aware blocking, dynamic on-the-fly transpose-and-pack, and fully register-resident tile-based microkernels. Leveraging multi-vector load instructions and fine-grained tile register scheduling, MpGEMM achieves a 1.23× speedup over Apple’s Accelerate framework on the Apple M4 Pro—outperforming leading open-source libraries. We further validate its effectiveness on realistic large-model workloads, including DeepSeek and LLaMA, demonstrating substantial end-to-end inference acceleration.
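The "dynamic on-the-fly transpose-and-pack" principle can be illustrated with a minimal sketch: while streaming a panel of the source matrix, the packing routine simultaneously transposes it into a contiguous buffer so the microkernel can later read it with unit stride. The function name, panel shape, and layout below are illustrative assumptions, not MpGEMM's actual internals.

```c
#include <stddef.h>
#include <assert.h>

/* Illustrative transpose-and-pack: copy a kb x nb panel of a
   row-major matrix B (leading dimension ldb) into a contiguous
   buffer in transposed order, so each packed row holds one column
   of B. Hypothetical sketch; not MpGEMM's actual packing layout. */
static void pack_transpose(int kb, int nb, const float *B, int ldb,
                           float *packed) {
    for (int j = 0; j < nb; j++)       /* packed row = B column j   */
        for (int p = 0; p < kb; p++)   /* packed col = B row p      */
            packed[(size_t)j * kb + p] = B[(size_t)p * ldb + j];
}
```

Fusing the transpose into the copy means the data is touched only once on its way into cache, rather than transposed in a separate pass.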
📝 Abstract
General Matrix Multiplication (GEMM) is a critical kernel in high-performance computing and deep learning. While modern ISA extensions such as ARM's Scalable Matrix Extension (SME) introduce dedicated hardware for matrix operations, existing linear algebra libraries fail to fully exploit this hardware's potential, particularly for large matrices. This paper presents MpGEMM, an open-source library that leverages key architectural features of SME to optimize GEMM across multiple precisions. Through a systematic characterization of SME, we derive optimization guidelines that inform our design. MpGEMM employs cache-aware partitioning, efficient data packing with on-the-fly transposition, and specialized micro-kernels that utilize multi-vector loads and all available tile registers. Evaluated on an Apple M4 Pro with real-world workloads from DeepSeek and LLaMA, MpGEMM achieves an average speedup of 1.23× over the vendor-optimized Apple Accelerate library and significantly outperforms other open-source alternatives.
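The cache-aware partitioning described above can be sketched as a classic blocked GEMM: the three loops over matrix dimensions are tiled so that each working set of A, B, and C fits in cache before the inner micro-kernel runs. The block sizes and loop order below are generic illustrative choices, not the tuned parameters or SME tile micro-kernel of MpGEMM itself.

```c
#include <string.h>
#include <assert.h>

/* Illustrative cache-aware blocked GEMM, C = A * B, all row-major.
   MC/KC/NC are hypothetical blocking parameters chosen so each
   tile's working set fits in cache; real libraries tune these to
   the target microarchitecture. */
enum { MC = 64, KC = 64, NC = 64 };

static void gemm_blocked(int M, int N, int K,
                         const float *A, const float *B, float *C) {
    memset(C, 0, (size_t)M * N * sizeof(float));
    for (int jc = 0; jc < N; jc += NC)          /* column panels of C */
        for (int pc = 0; pc < K; pc += KC)      /* depth panels       */
            for (int ic = 0; ic < M; ic += MC) {/* row panels of C    */
                int mb = (M - ic < MC) ? M - ic : MC;
                int nb = (N - jc < NC) ? N - jc : NC;
                int kb = (K - pc < KC) ? K - pc : KC;
                /* "micro-kernel": accumulate one cache-resident tile */
                for (int i = 0; i < mb; i++)
                    for (int p = 0; p < kb; p++) {
                        float a = A[(size_t)(ic + i) * K + pc + p];
                        for (int j = 0; j < nb; j++)
                            C[(size_t)(ic + i) * N + jc + j] +=
                                a * B[(size_t)(pc + p) * N + jc + j];
                    }
            }
}
```

In a real SME implementation the innermost three loops would be replaced by a register-resident tile micro-kernel using outer-product accumulate instructions; the blocking structure around it is what keeps those tiles fed from cache.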