🤖 AI Summary
To address performance bottlenecks of scan-based operators (e.g., sorting, tensor masking, top-k/top-p sampling) on Ascend AI accelerators, this paper proposes the first parallel prefix-sum algorithm deeply integrated with matrix multiplication. The method orchestrates the cube unit, which excels at matrix accumulation, with the vector unit, which handles vectorized scanning, and introduces a memory-bandwidth-aware multi-core scheduling strategy. This work is the first systematic exploitation of Ascend's cube units for scan primitives. Experimental results demonstrate: (i) 5.0×–9.6× speedups for single-core scan operations; (ii) multi-core scan throughput reaching 37.5% of the theoretical memory bandwidth; (iii) a 3.3× speedup for radix sort built on the scan primitive; and (iv) broad performance improvements across key scan-derived operators in AI workloads.
📝 Abstract
We design and implement parallel prefix sum (scan) algorithms using Ascend AI accelerators. Ascend accelerators feature specialized computing units: the cube units for efficient matrix multiplication and the vector units for optimized vector operations. A key feature of the proposed scan algorithms is their extensive use of matrix multiplications and accumulations enabled by the cube unit. To showcase the effectiveness of these algorithms, we also implement and evaluate several scan-based operators commonly used in AI workloads, including sorting, tensor masking, and top-$k$ / top-$p$ sampling. Our single-core results demonstrate substantial performance improvements, with speedups ranging from $5\times$ to $9.6\times$ compared to vector-only implementations for sufficiently large input lengths. Additionally, we present a multi-core scan algorithm that fully utilizes both the cube and vector units of Ascend, reaching up to 37.5% of the theoretical memory bandwidth. Furthermore, our radix sort implementation, which utilizes matrix multiplications for its parallel splits, showcases the potential of matrix engines to enhance complex operations, offering up to $3.3\times$ speedup over the baseline.
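The core idea behind expressing a scan as matrix work can be illustrated with a minimal sketch (this is an illustration of the general technique, not the paper's actual Ascend kernel): an inclusive prefix sum of a vector $x$ equals $Lx$, where $L$ is a lower-triangular matrix of ones. This reformulation is what allows a matrix engine such as the cube unit to produce scan results through multiply-accumulate operations.

```python
def matmul_scan(x):
    """Inclusive prefix sum computed as L @ x, where L is a
    lower-triangular matrix of ones. A matrix engine can evaluate
    this product directly, turning a scan into multiply-accumulates."""
    n = len(x)
    # L[i][j] = 1 for j <= i, so (L @ x)[i] = x[0] + x[1] + ... + x[i]
    L = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
    return [sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]

# Example: the scan of [3, 1, 4, 1, 5] is [3, 4, 8, 9, 14]
print(matmul_scan([3, 1, 4, 1, 5]))
```

In practice a kernel would tile the input and combine per-tile matrix scans with a vector-unit pass over tile totals, rather than materializing an $n \times n$ matrix; the sketch only shows why the cube unit can participate at all.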