🤖 AI Summary
Modern GPUs lack customizable, scalable, and cross-platform performance profiling tools tailored for AI workloads.
Method: This paper introduces a compiler-native profiling paradigm that deeply integrates fine-grained performance analysis into the Triton compilation stack. Leveraging a multi-level IR design, it implements kernel-level profiling via compiler passes that jointly orchestrate static instrumentation and dynamic sampling.
Contribution/Results: Our approach enables, for the first time, accurate modeling and diagnosis of complex optimizations—including instruction-level parallelism, memory access patterns, and compute-memory overlap—with only 8.2% runtime overhead and ≤2% measurement error. Compared to conventional profilers, it significantly improves interpretability and tunability of AI kernel performance bottlenecks. The resulting infrastructure provides lightweight, high-fidelity, end-to-end performance insights across the AI systems software stack.
📝 Abstract
In this work, we propose KPerfIR, a novel multilevel compiler-centric infrastructure to enable the development of customizable, extendable, and portable profiling tools tailored for modern artificial intelligence (AI) workloads on modern GPUs. Our approach integrates profiling capabilities directly into the compiler workflow, allowing profiling functionalities to be implemented as compiler passes, offering a programmable and reusable framework for performance analysis. This design bridges the gap between compilers and profilers, enabling fine-grained insights into complex optimization challenges such as overlapping the execution of fine-grained function units on GPUs. KPerfIR is integrated into the Triton infrastructure to highlight the power of a compiler-centric approach to advance performance analysis and optimization in the ever-evolving landscape of AI compilers. Our evaluation shows that our tool incurs low overhead (8.2%), provides accurate measurements (2% relative error), and delivers actionable insights into complicated GPU intra-kernel optimizations.