🤖 AI Summary
Deploying GEMM on tile-based multi-PE accelerators is difficult because of deployment complexity and deep hardware-software coupling. To address this, we propose an end-to-end automated deployment framework built around the novel “Design in Tiles” paradigm, which integrates configurable execution modeling, hardware-aware automatic mapping, hierarchical tiling scheduling, and compute-memory co-optimized compilation. For the first time, we achieve higher PE utilization than NVIDIA GH200’s expert-tuned library on a large-scale 32×32 tile configuration. At FP8 precision, our framework delivers 1979 TFLOPS peak performance and accelerates diverse matrix shapes by 1.2–2.0× relative to the GH200. This work bridges the compilation gap between configurable hardware architectures and high-level computational graphs, establishing a general, efficient, and scalable methodology for automatic mapping onto domain-specific accelerators.
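To give a flavor of what hierarchical tiling scheduling means for GEMM, the sketch below partitions the output matrix across a 2D PE grid and lets each PE accumulate its block over K-dimension tiles. The grid shape, tile size, and function name are illustrative assumptions, not the paper's actual mapping or scheduling algorithm.

```python
import numpy as np

def tiled_gemm(A, B, pe_rows=4, pe_cols=4, k_tile=2):
    """Illustrative hierarchical tiled GEMM (not the paper's mapping).

    Outer level: C is split into a (pe_rows x pe_cols) grid of blocks,
    one block per PE. Inner level: each PE accumulates its block over
    K-dimension tiles of width k_tile, mimicking staged operand reuse.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and M % pe_rows == 0 and N % pe_cols == 0 and K % k_tile == 0
    C = np.zeros((M, N), dtype=A.dtype)
    bm, bn = M // pe_rows, N // pe_cols
    for pr in range(pe_rows):              # PE grid row index
        for pc in range(pe_cols):          # PE grid column index
            for k0 in range(0, K, k_tile): # inner K-tiling within one PE
                C[pr*bm:(pr+1)*bm, pc*bn:(pc+1)*bn] += (
                    A[pr*bm:(pr+1)*bm, k0:k0+k_tile]
                    @ B[k0:k0+k_tile, pc*bn:(pc+1)*bn]
                )
    return C
```

In a real deployment the outer loops would run in parallel on the PE grid and the inner K-loop would overlap compute with operand movement; here everything is serialized purely to show the blocking structure.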
📝 Abstract
Tile-based many-Processing-Element (PE) accelerators can achieve competitive performance on General Matrix Multiplication (GEMM), but they are extremely hard to program: the optimal software mapping is deeply coupled with the hardware design, making manual deployment unwieldy. We propose "Design in Tiles (DiT)", an automated framework connecting a deployment toolchain with a configurable executable model for these accelerators. For evaluation, we apply the framework to GEMM on a large accelerator configuration (32×32 tiles, 1979 TFLOPS@FP8, 4 TB/s bandwidth) comparable to an NVIDIA GH200. DiT achieves higher PE utilization than the GH200 running its expert-tuned GEMM libraries, delivering a 1.2–2.0× speedup across diverse matrix shapes.