🤖 AI Summary
This work addresses the challenge of predicting execution performance for GPU warp-specialized kernels. We propose an end-to-end performance model based on differential equations that jointly characterizes key factors including warp size, tiling dimensions, matrix dimensions, memory bandwidth, and thread divergence. The model is validated through both architectural analysis and empirical CUDA kernel measurements, augmented by a detailed bandwidth model. Its key innovation is the first application of differential equations to warp-level performance modeling, enabling quantitative characterization of the mapping between warp-level parallelism structures and performance bottlenecks. Experimental evaluation demonstrates a prediction error below 8.2%. The model supports compiler-driven auto-tuning and adaptive parameter configuration, achieving a 17% improvement in energy efficiency on sparse computation and GEMM workloads.
📝 Abstract
This paper presents a performance model tailored for warp-specialization kernels, focusing on factors such as warp size, tiling size, input matrix size, memory bandwidth, and thread divergence. Our model offers accurate predictions of execution time by leveraging differential equations validated through simulations and experiments. The insights gained from this model not only enhance our understanding of warp specialization techniques but also have practical implications for optimizing GPU-accelerated applications through compiler optimizations, kernel parameter tuning, and algorithm design.
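The paper's actual equations are not reproduced in this summary, but the general idea of a differential-equation-based execution-time model can be illustrated with a minimal, purely hypothetical sketch: remaining work W(t) drains at a rate capped by either compute throughput or memory bandwidth (a roofline-style limit), and predicted time is obtained by integrating dW/dt = -r until W reaches zero. All parameter values and the `predict_time` function below are illustrative assumptions, not taken from the paper.

```python
def predict_time(flops, bytes_moved,
                 peak_flops=1e13,   # sustained FLOP/s (assumed, not from the paper)
                 bandwidth=9e11,    # memory bandwidth in bytes/s (assumed)
                 dt=1e-6):
    """Hypothetical sketch: Euler-integrate dW/dt = -r, where W is the
    remaining work in FLOPs and r is a roofline-style rate cap."""
    intensity = flops / bytes_moved                 # arithmetic intensity (FLOP/byte)
    rate = min(peak_flops, bandwidth * intensity)   # compute- or bandwidth-bound
    t, work = 0.0, float(flops)
    while work > 0:
        work -= rate * dt                           # dW/dt = -rate
        t += dt
    return t
```

In a constant-rate setting the integration collapses to `flops / rate`; the differential form matters once the rate itself varies over time (e.g., with divergence or phase changes), which is the regime the paper's model targets.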