🤖 AI Summary
CUDA kernel optimization remains a critical bottleneck for AI training and inference performance, yet existing LLM-based code optimization approaches struggle to ensure correctness and efficiency simultaneously, lacking both a unified problem formulation and standardized evaluation benchmarks.
Method: This paper formalizes CUDA kernel optimization as a constrained code evolution task and introduces EvoEngineer, the first dedicated framework integrating large language models (LLMs), evolutionary algorithms, and CUDA programming-model constraints, augmented with an optimization-strategy guidance mechanism that preserves semantic correctness while maximizing performance gains.
Contribution/Results: Evaluated on 91 real-world CUDA kernels, EvoEngineer achieves a median speedup of 2.72×, a valid-code generation rate of 69.8%, and a peak speedup of 36.75×. Among the 50 cases that achieve over 2× acceleration, it delivers the best performance on 28, demonstrating state-of-the-art efficacy and robustness in automated CUDA kernel optimization.
📝 Abstract
CUDA kernel optimization has become a critical bottleneck for AI performance, as deep learning training and inference efficiency directly depends on highly optimized GPU kernels.
Despite the promise of Large Language Models (LLMs) for automating kernel optimization, this field suffers from a fragmented ecosystem of isolated and incomparable approaches with unclear problem formulations.
Furthermore, general-purpose LLM code evolution methods cannot meet the strict correctness requirements of CUDA kernel optimization.
We address these fundamental challenges by first formalizing CUDA kernel optimization as a code optimization task with a clear objective, constraints, and evaluation metrics.
We then establish the first systematic LLM-based code evolution framework, EvoEngineer, which provides guidance for designing and adapting optimization strategies to balance performance and correctness.
Finally, we implement a kernel optimization system based on this framework and conduct extensive experiments on 91 real-world CUDA kernels.
Our results demonstrate that EvoEngineer achieves a principled balance between performance and correctness, with the highest averaged median speedup of **2.72**× over baseline CUDA kernels and a code validity rate of **69.8**%, outperforming existing methods on both dimensions.
Our method achieves a maximum speedup of **36.75**× among all operations over PyTorch kernels and delivers the highest speedup on **28** (**56.0%**) of the 50 operations that achieve over **2×** acceleration.
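The constrained code-evolution loop the abstract describes can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's implementation: a "kernel" is reduced to two tuning parameters, `is_correct` stands in for validating candidate outputs against the reference kernel, `run_candidate` stands in for profiling, and `mutate` stands in for an LLM-proposed code edit. The point it shows is the constraint structure: maximize speedup over a baseline, subject to a correctness check that gates every candidate before it is ever timed.

```python
import random

BASELINE_TIME = 100.0  # simulated runtime of the unoptimized baseline kernel

def run_candidate(params):
    """Simulated runtime (lower is better); stands in for profiling a kernel."""
    tile, unroll = params
    return BASELINE_TIME / (1 + 0.1 * tile + 0.05 * unroll)

def is_correct(params):
    """Stands in for checking candidate outputs against the reference kernel."""
    tile, unroll = params
    return tile <= 32 and unroll <= 8  # configs outside limits are invalid

def mutate(params, rng):
    """Stands in for an LLM-proposed code edit to the current best kernel."""
    tile, unroll = params
    return (max(1, tile + rng.choice([-4, 4])),
            max(1, unroll + rng.choice([-1, 1])))

def evolve(generations=50, seed=0):
    rng = random.Random(seed)
    best = (1, 1)                       # naive baseline configuration
    best_time = run_candidate(best)
    for _ in range(generations):
        cand = mutate(best, rng)
        if not is_correct(cand):        # correctness constraint gates the search
            continue
        t = run_candidate(cand)
        if t < best_time:               # keep only faster *and* correct candidates
            best, best_time = cand, t
    return best, BASELINE_TIME / best_time  # (config, speedup over baseline)
```

The design choice worth noting is that correctness is a hard constraint, not a penalty term: an invalid candidate is discarded outright rather than traded off against speed, which mirrors the "principled balance" framing in the abstract.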