🤖 AI Summary
Existing approaches to large language model (LLM)-generated GPU kernels struggle to ensure correctness and achieve high performance at the same time, with limited gains from iterative refinement. This work proposes an agent-based closed-loop framework that integrates generation, testing, and optimization, adopting the CuTe abstraction layer to provide a stable representation of performance-critical structures. By incorporating workload-aware prompting, execution-driven validation, structured debugging, and a staged optimization strategy, the framework progressively improves the generated kernels. Evaluated on matrix multiplication and activation-function tasks, the resulting kernels are functionally correct and achieve performance comparable to highly optimized hand-tuned libraries.
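To make the closed loop concrete, the sketch below shows one plausible shape of such a generate-test-optimize workflow. It is an illustrative reconstruction, not the paper's implementation: the function names (`generate`, `validate`, `measure`), the data structure, and the staged-optimization prompts are all assumptions.

```python
# Hypothetical sketch of a closed-loop generate-test-optimize workflow.
# All names below are illustrative placeholders, not the paper's actual API.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class KernelCandidate:
    source: str                      # CuTe/CUDA source text produced by the LLM
    correct: bool = False
    latency_ms: Optional[float] = None


def closed_loop(
    generate: Callable[[str], str],               # LLM call: prompt -> kernel source
    validate: Callable[[str], tuple[bool, str]],  # compile + run against a reference
    measure: Callable[[str], float],              # latency in milliseconds
    task_prompt: str,
    optimization_stages: list[str],               # e.g. tiling -> shared memory -> vectorization
    max_debug_rounds: int = 5,
) -> KernelCandidate:
    """Progressively refine a single kernel: generate, debug until correct,
    then apply staged optimization prompts, keeping only verified improvements."""
    source = generate(task_prompt)

    # Structured debugging: feed compiler/runtime errors back until the kernel passes.
    for _ in range(max_debug_rounds):
        ok, feedback = validate(source)
        if ok:
            break
        source = generate(f"{task_prompt}\nFix this error:\n{feedback}\n\n{source}")
    else:
        return KernelCandidate(source=source, correct=False)

    best = KernelCandidate(source=source, correct=True, latency_ms=measure(source))

    # Staged optimization: one optimization theme per round, re-validated before acceptance.
    for stage in optimization_stages:
        candidate_src = generate(f"Optimize for: {stage}\n\n{best.source}")
        ok, _ = validate(candidate_src)
        if not ok:
            continue  # reject any candidate that regresses correctness
        latency = measure(candidate_src)
        if latency < best.latency_ms:
            best = KernelCandidate(candidate_src, True, latency)

    return best
```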
📝 Abstract
High-performance GPU kernels are critical to modern machine learning systems, yet developing efficient implementations remains a challenging, expert-driven process due to the tight coupling among algorithmic structure, memory hierarchy usage, and hardware-specific optimizations. Recent work has explored using large language models (LLMs) to generate GPU kernels automatically, but the generated implementations often struggle to maintain correctness and achieve competitive performance across iterative refinement. We present CuTeGen, an agentic framework for automated generation and optimization of GPU kernels that treats kernel development as a structured generate-test-refine workflow. Unlike approaches that rely on one-shot generation or large-scale search over candidate implementations, CuTeGen focuses on progressive refinement of a single evolving kernel through execution-based validation, structured debugging, and staged optimization. A key design choice is to generate kernels using the CuTe abstraction layer, which exposes performance-critical structures such as tiling and data movement while providing a more stable representation for iterative modification. To guide performance improvement, CuTeGen incorporates workload-aware optimization prompts and delayed integration of profiling feedback. Experimental results on matrix multiplication and activation workloads demonstrate that the framework produces functionally correct kernels and achieves competitive performance relative to optimized library implementations.
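As one concrete example of what execution-based validation can look like for a matrix multiplication kernel, the following sketch checks a candidate against a library reference and times it with CUDA events. It assumes PyTorch is available and that the generated kernel has already been compiled and bound to Python as a callable `candidate_matmul`; the binding mechanism, tolerances, and problem sizes are assumptions, not details from the paper.

```python
# Minimal sketch of execution-based validation and timing for a generated
# matmul kernel, assuming it is exposed as `candidate_matmul(a, b) -> c`.
import torch


def validate_and_time(candidate_matmul, m=4096, n=4096, k=4096, iters=20):
    a = torch.randn(m, k, device="cuda", dtype=torch.float16)
    b = torch.randn(k, n, device="cuda", dtype=torch.float16)

    # Correctness: compare against the library reference with a loose fp16 tolerance.
    ref = torch.matmul(a, b)
    out = candidate_matmul(a, b)
    if not torch.allclose(out, ref, rtol=1e-2, atol=1e-2):
        return False, None

    # Performance: wall-clock the kernel with CUDA events after a short warm-up.
    for _ in range(3):
        candidate_matmul(a, b)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        candidate_matmul(a, b)
    end.record()
    torch.cuda.synchronize()
    return True, start.elapsed_time(end) / iters  # milliseconds per call
```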