KPerfIR: Towards an Open and Compiler-centric Ecosystem for GPU Kernel Performance Tooling on Modern AI Workloads

📅 2025-05-27
🤖 AI Summary
Problem: Modern GPUs lack performance profiling tools that are customizable, scalable, cross-platform, and tailored to AI workloads. Method: This paper introduces a compiler-native profiling paradigm that integrates fine-grained performance analysis directly into the Triton compilation stack. Leveraging a multi-level IR design, it implements kernel-level profiling as compiler passes that jointly orchestrate static instrumentation and dynamic sampling. Contribution/Results: The approach enables accurate modeling and diagnosis of complex intra-kernel optimizations, including instruction-level parallelism, memory access patterns, and compute-memory overlap, with only 8.2% runtime overhead and ≤2% measurement error. Compared to conventional profilers, it significantly improves the interpretability and tunability of AI kernel performance bottlenecks, providing lightweight, high-fidelity, end-to-end performance insight across the AI systems software stack.
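To make "profiling as a compiler pass" concrete, here is a minimal, hedged sketch over a toy IR. The op names (`tt.load`, `tt.dot`, `prof.read_clock_*`), the `profiling_pass` interface, and the interpreter are hypothetical stand-ins; KPerfIR's actual passes operate on Triton's multi-level IRs and emit real clock-read instructions on the GPU.

```python
# Hedged sketch, NOT KPerfIR's real implementation: a "profiling pass"
# that rewrites a toy kernel IR by bracketing selected ops with
# clock-read ops, plus a toy interpreter that recovers per-op durations.
from dataclasses import dataclass, field

@dataclass
class Op:
    name: str          # e.g. "tt.load", "tt.dot" (illustrative names)
    cost: int = 0      # simulated cycles, used only by the toy interpreter

@dataclass
class Kernel:
    ops: list = field(default_factory=list)

def profiling_pass(kernel, targets):
    """Compiler pass: wrap every op whose name is in `targets`
    with start/end clock-read ops (static instrumentation)."""
    out = []
    for op in kernel.ops:
        if op.name in targets:
            out.append(Op("prof.read_clock_start"))
            out.append(op)
            out.append(Op("prof.read_clock_end"))
        else:
            out.append(op)
    return Kernel(out)

def run(kernel):
    """Toy interpreter: advance a simulated clock and return
    {op_name: duration} for every instrumented op."""
    clock, durations, start, timed = 0, {}, None, None
    for op in kernel.ops:
        if op.name == "prof.read_clock_start":
            start = clock
        elif op.name == "prof.read_clock_end":
            durations[timed] = clock - start
            start = None
        else:
            clock += op.cost
            if start is not None:
                timed = op.name
    return durations

k = Kernel([Op("tt.load", 40), Op("tt.dot", 120), Op("tt.store", 30)])
instrumented = profiling_pass(k, {"tt.dot"})
print(run(instrumented))  # → {'tt.dot': 120}
```

Because instrumentation is an ordinary pass, it composes with the rest of the pipeline: the same mechanism can be re-targeted at different ops or IR levels without touching the profiled kernel's source, which is the programmability the summary refers to.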

📝 Abstract
In this work, we propose KPerfIR, a novel multilevel compiler-centric infrastructure to enable the development of customizable, extendable, and portable profiling tools tailored for modern artificial intelligence (AI) workloads on modern GPUs. Our approach integrates profiling capabilities directly into the compiler workflow, allowing profiling functionalities to be implemented as compiler passes, offering a programmable and reusable framework for performance analysis. This design bridges the gap between compilers and profilers, enabling fine-grained insights into complex optimization challenges such as overlapping the execution of fine-grained function units on GPUs. KPerfIR is integrated into the Triton infrastructure to highlight the power of a compiler-centric approach to advance performance analysis and optimization in the ever-evolving landscape of AI compilers. Our evaluation shows that our tool incurs low overhead (8.2%), provides accurate measurements (2% relative error), and delivers actionable insights into complicated GPU intra-kernel optimizations.
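The abstract's two headline metrics follow the standard definitions of instrumentation overhead and relative measurement error. The sketch below is illustrative; the numbers are placeholders, not measurements from the paper.

```python
# Hedged sketch of the metric definitions behind "8.2% overhead" and
# "2% relative error". Inputs below are illustrative placeholders.

def runtime_overhead(t_instrumented, t_baseline):
    """Fractional slowdown introduced by instrumentation."""
    return (t_instrumented - t_baseline) / t_baseline

def relative_error(measured, reference):
    """Fractional deviation of a measured value from a reference."""
    return abs(measured - reference) / reference

print(runtime_overhead(1.082, 1.0))  # ≈ 0.082, i.e. 8.2% overhead
print(relative_error(102.0, 100.0))  # ≈ 0.02, i.e. 2% relative error
```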
Problem

Research questions and friction points this paper is trying to address.

Enabling customizable GPU profiling for modern AI workloads
Bridging the compiler-profiler gap for fine-grained GPU insights
Reducing overhead in GPU kernel performance analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compiler-centric profiling for modern AI workloads
Programmable framework via compiler passes
Low-overhead accurate GPU optimization insights