🤖 AI Summary
Modern GPU SIMT programming models exhibit a semantic gap with the underlying hardware's task parallelism, forcing developers to manually implement warp-level specialization and inter-warp communication—resulting in high development overhead and error-prone code. This paper proposes a compiler-driven approach to automated warp specialization. We introduce *asynchronous references* (arefs) as an intermediate representation that uniformly models inter-warp data dependencies and asynchronous communication. Leveraging high-level tiling annotations, our compiler automatically infers producer-consumer roles without modifying kernel source code. By integrating dataflow pipeline optimization with hardware-aware scheduling, we generate high-performance LLM operator kernels for NVIDIA H100. Experiments show our generated GEMM achieves up to 1.1× the performance of cuBLAS, while our attention kernel outperforms Triton by 1.2× and matches hand-tuned CUTLASS FlashAttention-3 kernels.
📝 Abstract
Modern GPUs feature specialized hardware units that enable high-performance, asynchronous dataflow execution. However, the conventional SIMT programming model is fundamentally misaligned with this task-parallel hardware, creating a significant programmability gap. While hardware-level warp specialization is the key to unlocking peak performance, it forces developers to manually orchestrate complex, low-level communication and software pipelines, a process that is labor-intensive, error-prone, and unsustainable. To address this challenge, we present Tawa, an automated compiler that systematically generates high-performance, warp-specialized code from a high-level, tile-based program. Central to our approach is a novel IR abstraction, asynchronous references (aref), which expresses warp-level communication without exposing low-level hardware details. Using this abstraction, Tawa automatically partitions programs into producer-consumer roles and manages the intricate dataflow pipeline, relieving developers of invasive kernel rewriting. Evaluation on NVIDIA H100 GPUs across representative LLM kernels shows that Tawa delivers high hardware utilization, achieving up to 1.1× speedup over highly optimized cuBLAS GEMM kernels. For attention workloads, Tawa attains 1.2× speedup over Triton and matches the performance of the hand-optimized CUTLASS C++ FlashAttention-3 kernel with far less programming effort.
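The producer-consumer structure the abstract describes can be pictured with an ordinary bounded buffer: one role fetches tiles, the other computes on them, and a small fixed-capacity channel lets the two overlap. The sketch below is a conceptual analogy only (it is not Tawa's IR or CUDA code, and the names `aref`, `producer`, `consumer` are illustrative); the bounded queue stands in for double-buffered shared memory guarded by asynchronous references.

```python
import queue
import threading

NUM_TILES = 4
# "aref": a bounded channel between the two roles. Capacity 2 models
# a double-buffered staging area -- the producer runs ahead by at most
# two tiles, so load and compute overlap.
aref = queue.Queue(maxsize=2)
results = []

def producer():
    # Producer role: fetch tiles and publish them through the aref.
    for i in range(NUM_TILES):
        tile = [i] * 4      # stand-in for a tile loaded from memory
        aref.put(tile)      # put() blocks when both buffers are full

def consumer():
    # Consumer role: wait on the aref, then compute on each tile.
    for _ in range(NUM_TILES):
        tile = aref.get()   # get() blocks until a tile is ready
        results.append(sum(tile))

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(results)  # [0, 4, 8, 12]
```

On real hardware the compiler replaces the queue's blocking with asynchronous copies and barrier-style synchronization between warps, but the dataflow dependency structure is the same.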