AI Summary
To address the lack of an efficient AI kernel development paradigm for AMD GPUs (CDNA architecture), this paper introduces HK, the first high-performance AI operator programming framework tailored for AMD. Methodologically, it systematically identifies CDNA-applicable programming primitives; designs block-based explicit memory management, fine-grained asynchronous execution, and worker-coordination mechanisms; and implements a C++-embedded domain-specific language (DSL) to establish a vendor-portable software abstraction layer. Contributions include: (1) the first AMD-specific high-performance programming model, breaking the NVIDIA-centric DSL monopoly; and (2) near-optimal or even assembly-level performance on core operators; e.g., attention (d=64) and the grouped-query attention (GQA) backward pass achieve 1.2-2.4× speedup over state-of-the-art baselines, significantly outperforming compiler-generated code.
Abstract
AMD GPUs offer state-of-the-art compute and memory bandwidth; however, peak-performance AMD kernels are written in raw assembly. To address the difficulty of mapping AI algorithms to hardware, recent work proposes C++-embedded, PyTorch-inspired domain-specific languages like ThunderKittens (TK) to simplify high-performance AI kernel development on NVIDIA hardware. We explore the extent to which such primitives -- for explicit tile-based programming with optimized memory accesses and fine-grained asynchronous execution across workers -- are NVIDIA-specific or general. We provide the first detailed study of the programming primitives that lead to performant AMD AI kernels, and we encapsulate these insights in the HipKittens (HK) programming framework. We find that the tile-based abstractions used in prior DSLs generalize to AMD GPUs; however, the algorithms that instantiate these abstractions must be rethought for AMD. We validate the HK primitives across CDNA3 and CDNA4 AMD platforms. In evaluations, HK kernels compete with AMD's hand-optimized assembly kernels for GEMMs and attention, and consistently outperform compiler baselines. Moreover, assembly is difficult to scale to the breadth of AI workloads; reflecting this, in some settings HK outperforms all available kernel baselines by 1.2-2.4$\times$ (e.g., $d=64$ attention, GQA backwards, memory-bound kernels). These findings help pave the way for a single, tile-based software layer for high-performance AI kernels that translates across GPU vendors. HipKittens is released at: https://github.com/HazyResearch/HipKittens.