Parallel Scan on Ascend AI Accelerators

📅 2025-05-21
🤖 AI Summary
To address performance bottlenecks of scan-type operators (e.g., sorting, tensor masking, top-k/p sampling) on Ascend AI accelerators, this paper proposes the first parallel prefix-sum algorithm deeply integrated with matrix multiplication. Our method innovatively orchestrates the cube unit—optimized for matrix accumulation—with the vector unit—designed for vectorized scanning—and introduces a multi-core memory-bandwidth-aware scheduling strategy. This work marks the first systematic exploitation of Ascend’s cube units for scan primitives. Experimental results demonstrate: (i) 5.0×–9.6× speedup for single-core scan operations; (ii) multi-core scan throughput reaching 37.5% of theoretical memory bandwidth; (iii) 3.3× acceleration for radix sort built upon our scan primitive; and (iv) broad performance improvements across key scan-derived operators in AI workloads.
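The core idea of expressing a prefix sum as a matrix operation can be sketched in a few lines: an inclusive scan of a vector x equals L @ x, where L is a lower-triangular matrix of ones. This is a minimal NumPy illustration of the principle, not the paper's Ascend cube-unit kernel; the function name and tiling-free formulation are ours.

```python
import numpy as np

def scan_via_matmul(x):
    # Inclusive prefix sum written as a matrix product: y = L @ x,
    # where L is a lower-triangular matrix of ones. On Ascend, a
    # matmul like this is the kind of work the cube unit accelerates.
    n = len(x)
    L = np.tril(np.ones((n, n), dtype=x.dtype))
    return L @ x

x = np.array([3, 1, 4, 1, 5], dtype=np.float32)
scan_via_matmul(x)  # → [3., 4., 8., 9., 14.]
```

In practice the paper tiles the input so that per-block scans run on the vector unit while block-sum combination is offloaded to cube-unit matrix accumulations; the dense n×n matrix above is only for exposition.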

📝 Abstract
We design and implement parallel prefix sum (scan) algorithms using Ascend AI accelerators. Ascend accelerators feature specialized computing units: the cube units for efficient matrix multiplication and the vector units for optimized vector operations. A key feature of the proposed scan algorithms is their extensive use of matrix multiplications and accumulations enabled by the cube unit. To showcase the effectiveness of these algorithms, we also implement and evaluate several scan-based operators commonly used in AI workloads, including sorting, tensor masking, and top-$k$ / top-$p$ sampling. Our single-core results demonstrate substantial performance improvements, with speedups ranging from $5\times$ to $9.6\times$ compared to vector-only implementations for sufficiently large input lengths. Additionally, we present a multi-core scan algorithm that fully utilizes both the cube and vector units of Ascend, reaching up to 37.5% of the theoretical memory bandwidth. Furthermore, our radix sort implementation, which utilizes matrix multiplications for its parallel splits, showcases the potential of matrix engines to enhance complex operations, offering up to $3.3\times$ speedup over the baseline.
Problem

Research questions and friction points this paper is trying to address.

Design parallel prefix sum algorithms for Ascend AI accelerators
Optimize scan-based AI operators using matrix multiplication units
Improve performance of sorting and sampling via parallel splits
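The "parallel splits" mentioned above are the scan-based building block of radix sort: a stable partition of the keys by one bit, computed from prefix sums of the bit flags. A brief NumPy sketch of this standard technique follows; in the paper's implementation the scans themselves would be realized via cube-unit matrix multiplications, whereas here plain `cumsum` stands in for them, and the function names are ours.

```python
import numpy as np

def split_by_bit(keys, bit):
    # Stable partition by a single bit ("parallel split").
    # Elements with the bit clear keep their order at the front;
    # elements with the bit set follow, also in order.
    b = (keys >> bit) & 1               # flag: 1 if the bit is set
    zeros = np.cumsum(b == 0)           # inclusive scan of zero-flags
    n_zeros = zeros[-1]                 # total keys with the bit clear
    # Destination index: rank among zeros, or n_zeros + rank among ones.
    idx = np.where(b == 0, zeros - 1, n_zeros + np.cumsum(b) - 1)
    out = np.empty_like(keys)
    out[idx] = keys
    return out

def radix_sort(keys, bits=8):
    # LSB-first radix sort: one stable split per bit.
    for bit in range(bits):
        keys = split_by_bit(keys, bit)
    return keys
```

Because each pass is only scans plus a scatter, replacing the scans with matmul-based ones is what lets the cube unit accelerate the whole sort.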
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel prefix sum using Ascend AI accelerators
Extensive use of cube unit matrix multiplications
Multi-core scan algorithm utilizing cube and vector units
Bartlomiej Wroblewski
Computing Systems Lab, Huawei Zurich Research Center, Switzerland
Gioele Gottardo
Computing Systems Lab, Huawei Zurich Research Center, Switzerland
Anastasios Zouzias
Huawei Research