AI Summary
Existing LLM-driven approaches to GPU kernel optimization are largely confined to machine learning scenarios and lack both cross-domain generality and a systematic evaluation benchmark. To address this gap, this work proposes CUDAMaster, a hardware-aware, multi-agent automated optimization framework that integrates performance profiling with an automated compilation toolchain to generate CUDA kernels for diverse workloads under both FP32 and BF16 precision. It further introduces MSKernelBench, the first comprehensive benchmark covering algebraic operations, LLM operators, sparse matrix computations, and scientific computing routines. Experimental results show that CUDAMaster outperforms Astra by an average of approximately 35% across most operators, with certain kernels matching or even surpassing the performance of the proprietary cuBLAS library.
Abstract
Optimizing GPU kernels by hand is a challenging and time-consuming task. With the rapid development of LLMs, automated GPU kernel optimization is becoming a tangible reality. However, current LLM-driven optimization methods focus narrowly on machine learning applications, such as PyTorch operator optimization, and overlook broader domains such as sparse matrix operations in scientific computing. Extending to these domains poses new challenges for both benchmarking and algorithm design, so developing a general-purpose automated kernel optimization method is our primary focus. In this paper, we address the absence of systematic multi-scenario evaluation by introducing MSKernelBench, a benchmark spanning fundamental algebraic operations, common LLM kernels, sparse matrix operators, and scientific computing routines, each supported at both FP32 and BF16 precision. Building on this benchmark, we present CUDAMaster, a multi-agent, hardware-aware system for kernel optimization that leverages profiling information and automatically constructs the full compilation and execution toolchain. Experimental results show that CUDAMaster achieves significant speedups across most operators, outperforming Astra by about 35%. In several cases, its performance matches or surpasses that of highly optimized, closed-source libraries such as cuBLAS. A demo showcasing the original and optimized code for each operator is available at https://hanyx2021.github.io/MSKernelBenchDemo/.