🤖 AI Summary
This work addresses the challenges of existing CPU matrix extensions, which struggle to adapt efficiently across diverse architectures due to high hardware/software overhead, tight coupling with the processor pipeline, and fine-grained synchronization requirements. To overcome these limitations, the authors propose a unified and configurable CPU matrix extension architecture that decouples the matrix unit from the pipeline, enabling asynchronous execution, flexible-granularity matrix-multiply abstractions, and mixed-precision computation. This design achieves low-overhead integration while retaining close coordination with existing compute and memory resources. Evaluated on four open-source CPU platforms, the approach attains over 90% GEMM utilization and outperforms Intel AMX by 1.57× on ResNet, 1.57× on BERT, and 2.31× on Llama3. A 4 TOPS@2GHz matrix unit implemented in 14nm technology occupies only 0.53 mm².
📝 Abstract
Matrix extensions have emerged as an essential feature in modern CPUs to address the surging demands of AI workloads. However, existing designs often incur substantial hardware and software design overhead. Tight coupling with the CPU pipeline complicates integration across diverse CPUs, while fine-grained synchronous instructions hinder the development of high-performance kernels.
This paper proposes a unified and configurable CPU matrix extension architecture. By decoupling matrix units from the CPU pipeline, the design enables low-overhead integration while maintaining close coordination with existing compute and memory resources. The configurable matrix unit supports mixed-precision operations and adapts to diverse compute demands and memory bandwidth constraints. An asynchronous matrix multiplication abstraction with flexible granularity conceals hardware details, simplifies matrix-vector overlap, and supports a unified software stack.
The architecture is integrated into four open-source CPU RTL platforms and evaluated on representative AI models. Matrix unit utilization under GEMM workloads exceeds 90% across all platforms. When configured with compute throughput and memory bandwidth comparable to Intel AMX, our design achieves speedups of 1.57×, 1.57×, and 2.31× on ResNet, BERT, and Llama3, with over 30% of the gains attributed to overlapped matrix-vector execution. A 4 TOPS@2GHz matrix unit occupies only 0.53 mm² in 14nm CMOS. These results demonstrate strong cross-platform adaptability and effective hardware-software co-optimization, offering a practical matrix extension for the open-source community.