🤖 AI Summary
Problem: GPU-accelerated triangular matrix–matrix multiplication (TRMM) and triangular solve (TRSM) kernels suffer from poor portability and inconsistent performance across heterogeneous GPU architectures (NVIDIA, AMD, Apple Silicon).
Method: The paper proposes a unified algorithmic framework based on recursive divide-and-conquer that redirects the bulk of the work to GEMM. It reformulates TRMM/TRSM as sequences of standard GEMM calls, optimizes memory access patterns and compute–memory overlap, and leverages Julia's multiple dispatch and metaprogramming for a hardware-agnostic abstraction; a minimal sketch of the recursion follows the summary.
Contribution/Results: The implementation, roughly 500 lines of code, enables single-API deployment across all target platforms. It delivers the first high-performance TRMM/TRSM implementation on Apple Silicon GPUs; for large matrices, throughput approaches that of cuBLAS and rocBLAS, while cross-architecture performance variance is substantially reduced. The work shows that high performance and high portability need not be traded off in GPU linear-algebra kernels.
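Below is a minimal Julia sketch (not the paper's code) of the recursive idea for the triangular-solve case: the lower-triangular factor is split into quadrants, the diagonal blocks are handled recursively, and the off-diagonal update is a plain GEMM via `mul!`, so most of the FLOPs flow through the backend's tuned GEMM. The function name `rec_trsm!`, the `ldiv!` base case, and the block-size threshold are illustrative assumptions rather than the paper's API.

```julia
using LinearAlgebra

# Sketch of recursive TRSM (left, lower-triangular, solving L * X = B in place):
# split L into quadrants, recurse on the diagonal blocks, and do the
# off-diagonal update as a GEMM with 5-argument mul!. Names and the
# block-size threshold are illustrative, not the paper's API.
function rec_trsm!(L::AbstractMatrix, B::AbstractMatrix; blocksize::Int = 256)
    n = size(L, 1)
    if n <= blocksize
        # Base case: direct triangular solve on the small diagonal block.
        ldiv!(LowerTriangular(L), B)
        return B
    end
    k = n ÷ 2
    L11 = view(L, 1:k, 1:k)
    L21 = view(L, k+1:n, 1:k)
    L22 = view(L, k+1:n, k+1:n)
    B1  = view(B, 1:k, :)
    B2  = view(B, k+1:n, :)

    rec_trsm!(L11, B1; blocksize = blocksize)  # X1 = L11 \ B1 (in place in B1)
    mul!(B2, L21, B1, -1, 1)                   # B2 -= L21 * X1: the GEMM bulk
    rec_trsm!(L22, B2; blocksize = blocksize)  # X2 = L22 \ B2 (in place in B2)
    return B
end
```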
📝 Abstract
This paper presents a performant and portable recursive implementation of triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM) in Julia for GPUs, two kernels that underlie many linear-algebra algorithms. We restructure TRMM and TRSM so that most work is executed as general matrix-matrix multiplication (GEMM), improving use of the GPU memory hierarchy and reducing latency. Exploiting Julia's multiple dispatch and metaprogramming together with the GPUArrays and KernelAbstractions frameworks, we expose a single hardware-agnostic API that runs on NVIDIA, AMD, and Apple Silicon GPUs. For large matrices the recursive code reaches throughput comparable to vendor libraries such as cuBLAS and rocBLAS, while providing these routines on Apple Silicon for the first time. The entire implementation is only a few hundred lines of code, showing that unified Julia programs can deliver near-vendor performance across heterogeneous architectures.
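As a hedged illustration of the hardware-agnostic use the abstract describes, the same generic routine can be handed arrays from different GPU backends; Julia's multiple dispatch then routes `mul!` and `ldiv!` to the corresponding vendor library. The backend package, sizes, and data below are invented for the example.

```julia
# Illustrative use on an NVIDIA GPU; on other hardware one would load AMDGPU
# or Metal instead and build the matrices with that backend's array type.
# Float32 is chosen so the same sizes also run on Apple Silicon.
using CUDA, LinearAlgebra

n, m = 4096, 1024
L = tril(CUDA.randn(Float32, n, n))  # illustrative data only; a real use passes a well-conditioned factor
B = CUDA.randn(Float32, n, m)
rec_trsm!(L, B)                      # off-diagonal updates execute as cuBLAS GEMMs
```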