🤖 AI Summary
To address the inability of AI accelerators equipped only with FP16 compute units, such as the Ascend NPU, to natively execute FP32 general matrix multiplication (GEMM), this paper proposes a high-accuracy, high-performance FP32 GEMM emulation method. The approach comprises three key innovations: (1) a tunable-scaling-based FP32-to-FP16 decomposition with explicit error compensation; (2) term-wise accumulation, which significantly improves numerical stability in low-exponent regimes; and (3) cache-aware tiling coupled with double-buffered pipelining to overlap computation with communication and use hardware resources efficiently. Evaluated on the Ascend 910A NPU, the method achieves 77% of the theoretical FP32-equivalent peak performance while matching the accuracy of native FP32 GEMM and, in certain scenarios, demonstrating superior numerical robustness.
📝 Abstract
Low-precision matrix engines, such as the FP16 cube unit, offer high throughput but lack support for full-precision computation. In this work, we propose H2SGEMM, a high-performance algorithm for emulating FP32 general matrix-matrix multiplication (GEMM) using only FP16 computation units on a representative AI accelerator. The method decomposes each FP32 operand into two FP16 values and compensates for numerical errors through a tunable scaling strategy. A detailed analysis of numerical errors, including underflow conditions and precision loss, guides the selection of scaling parameters to preserve up to 22 bits of mantissa accuracy. We further investigate the effect of computation order on accuracy and demonstrate that a term-wise accumulation scheme improves numerical stability over conventional FP32 GEMM in low-exponent regimes. Finally, a cache-aware blocking strategy and a double-buffered pipeline are introduced to overlap memory transfers with computation, enabling H2SGEMM to achieve up to 77% of the theoretical FP32-equivalent peak performance on the Ascend 910A NPU, which lacks native FP32 support. Extensive numerical experiments confirm that our method not only recovers the accuracy of native FP32 GEMM but also exhibits superior numerical stability under certain conditions, owing to its structured, error-aware computation order.
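The two-term decomposition and term-wise accumulation described above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the function names, the scale of 2^11, and the emulation of FP16-input/FP32-accumulate matmuls via `astype` casts are all assumptions made for the sketch.

```python
import numpy as np

def split_fp32(x, scale=2.0**11):
    # Round each FP32 value to FP16 (high part), then store the scaled
    # residual in FP16 (low part). scale=2**11 is an illustrative choice
    # that keeps the residual well inside the FP16 normal range.
    hi = x.astype(np.float16)
    lo = ((x - hi.astype(np.float32)) * scale).astype(np.float16)
    return hi, lo

def h2sgemm_sketch(A, B, scale=2.0**11):
    # Emulate FP32 GEMM using only FP16 operands. Each matmul below models
    # a cube-unit call: FP16 inputs accumulated in FP32 (here simulated by
    # casting the FP16 operands up to FP32 before multiplying).
    a_hi, a_lo = split_fp32(A, scale)
    b_hi, b_lo = split_fp32(B, scale)
    mm = lambda x, y: x.astype(np.float32) @ y.astype(np.float32)
    # Term-wise accumulation: combine the partial products from largest
    # magnitude to smallest, undoing the scaling as we go.
    c = mm(a_hi, b_hi)
    c += (mm(a_hi, b_lo) + mm(a_lo, b_hi)) / scale
    c += mm(a_lo, b_lo) / (scale * scale)
    return c
```

With the two 11-bit FP16 mantissas, the high/low pair retains roughly 22 mantissa bits of each operand, which is consistent with the accuracy figure quoted in the abstract; the sketch omits the paper's cache-aware blocking and double-buffered pipeline, which concern performance rather than numerics.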