🤖 AI Summary
Existing GPU libraries rely on manually written fused kernels, which hinders flexible support for horizontal and vertical fusion (HF/VF) and leads to suboptimal use of on-chip SRAM and frequent spilling of intermediate data to global memory. This work proposes an automatic kernel fusion framework based on C++17 template metaprogramming: users declaratively compose reusable GPU function components, and the system generates an optimized fused kernel at compile time, natively supporting both HF and VF while eliminating unnecessary global-memory transfers. The approach requires no custom compiler, precompiled templates, or hand-written fusion code, sharply reducing development overhead while improving programming flexibility and hardware utilization. An open-source implementation demonstrates speedups of 2x to more than 1000x over state-of-the-art GPU libraries across diverse benchmarks, combining high performance with high-level programmability.
📝 Abstract
Existing GPU libraries often struggle to fully exploit the parallel resources and on-chip memory (SRAM) of GPUs when chaining multiple GPU functions as individual kernels. While Kernel Fusion (KF) techniques like Horizontal Fusion (HF) and Vertical Fusion (VF) can mitigate this, current library implementations often require library developers to manually create fused kernels. Hence, library users rely on limited sets of pre-compiled or template-based fused kernels, which restricts the use cases that can benefit from HF and VF and increases development costs. To address these issues, we present a novel methodology for building GPU libraries that enables automatic on-demand HF and VF for arbitrary combinations of GPU library functions. Our methodology defines reusable, fusionable components that users combine via high-level programming interfaces. Leveraging C++17 metaprogramming features available in compilers like nvcc, our methodology generates a single, optimized fused kernel tailored to the user's specific sequence of operations at compile time, without needing a custom compiler or manual development and pre-compilation of kernel combinations. This approach abstracts low-level GPU complexities while maximizing GPU resource utilization and keeping intermediate data in SRAM. We provide an open-source implementation that achieves significant speedups over traditional libraries across various benchmarks, ranging from 2x to more than 1000x, validating the effectiveness of this methodology while preserving high-level programmability.