EvoEngineer: Mastering Automated CUDA Kernel Code Evolution with Large Language Models

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
CUDA kernel optimization remains a critical bottleneck for AI training and inference performance, yet existing LLM-based code optimization approaches struggle to ensure correctness and efficiency simultaneously, and the field lacks both a unified problem formulation and standardized evaluation benchmarks. Method: the paper formalizes CUDA kernel optimization as a constrained code evolution task and introduces EvoEngineer—the first dedicated framework integrating large language models (LLMs), evolutionary search, and CUDA programming-model constraints, augmented with an optimization-strategy guidance mechanism that preserves semantic correctness while pursuing performance gains. Contribution/Results: evaluated on 91 real-world CUDA kernels, EvoEngineer achieves a median speedup of 2.72× over baseline kernels with a 69.8% valid-code rate, and a peak speedup of 36.75× over PyTorch kernels. Among the 50 operations accelerated by more than 2×, it delivers the highest speedup on 28 (56.0%)—demonstrating state-of-the-art efficacy and robustness in automated CUDA kernel optimization.

📝 Abstract
CUDA kernel optimization has become a critical bottleneck for AI performance, as deep learning training and inference efficiency directly depends on highly optimized GPU kernels. Despite the promise of Large Language Models (LLMs) for automating kernel optimization, this field suffers from a fragmented ecosystem of isolated and incomparable approaches with unclear problem formulations. Furthermore, general-purpose LLM code evolution methods cannot meet strict correctness requirements of CUDA kernel optimization. We address these fundamental challenges by first formalizing CUDA kernel optimization as a code optimization task with a clear objective, constraints, and evaluation metrics. We then establish the first systematic LLM-based code evolution framework, EvoEngineer, that provides guidance for designing and adapting optimization strategies to achieve a balance between performance and correctness. Finally, we implement a kernel optimization system based on this framework and conduct extensive experiments on 91 real-world CUDA kernels. Our results demonstrate that EvoEngineer achieves a principled balance between performance and correctness, with the highest averaged median speedup of 2.72× over baseline CUDA kernels and a code validity rate of 69.8%, outperforming existing methods on both dimensions. Our method achieves a maximum speedup of 36.75× among all operations over PyTorch kernels and delivers the highest speedup on 28 (56.0%) of 50 operations that achieve over 2× acceleration.
Problem

Research questions and friction points this paper is trying to address.

Formalizing CUDA kernel optimization with clear objectives and constraints
Developing systematic LLM framework for performance-correctness balance
Automating GPU kernel evolution to overcome fragmented optimization approaches
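In notation supplied here for illustration (the paper's own formalization may differ in symbols), the constrained task described above can be written as maximizing the speedup of a candidate kernel $k'$ over the baseline $k_0$, subject to output equivalence on all test inputs:

```latex
\max_{k' \in \mathcal{K}} \; \frac{t(k_0)}{t(k')}
\quad \text{s.t.} \quad k'(x) = k_0(x) \;\; \forall x \in \mathcal{X}
```

where $\mathcal{K}$ is the space of candidate kernel implementations, $t(\cdot)$ is measured runtime, and $\mathcal{X}$ is the set of validation inputs; the hard equality constraint is what distinguishes this setting from unconstrained code generation.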
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizes CUDA kernel optimization with clear objectives
Establishes systematic LLM-based code evolution framework
Implements optimization system balancing performance and correctness
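The evolve-validate-select loop these bullets describe can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the real system mutates CUDA source via an LLM and times compiled kernels, whereas here candidates are plain Python functions with a synthetic `cost` attribute, and `mock_mutate` stands in for the LLM proposal step (all names are ours).

```python
import random

def is_correct(candidate, reference, test_inputs):
    # Correctness constraint: a candidate survives only if it matches the
    # reference output on every test input (the real system validates
    # compiled CUDA kernels against baseline outputs).
    return all(candidate(x) == reference(x) for x in test_inputs)

def cost(candidate):
    # Stand-in for measured kernel runtime.
    return getattr(candidate, "cost", 1.0)

def evolve(reference, seed_pop, mutate, test_inputs, generations=10, keep=4, seed=0):
    rng = random.Random(seed)
    pop = [c for c in seed_pop if is_correct(c, reference, test_inputs)]
    for _ in range(generations):
        parent = min(pop, key=cost)           # select current fastest candidate
        child = mutate(parent, rng)           # LLM proposes a rewritten kernel
        if is_correct(child, reference, test_inputs):
            pop.append(child)                 # only valid candidates enter the pool
        pop = sorted(pop, key=cost)[:keep]    # retain the fastest survivors
    return pop[0]

# Toy candidates: each computes 2*x but carries a synthetic runtime cost.
def make_candidate(c):
    def kernel(x):
        return x * 2
    kernel.cost = c
    return kernel

def mock_mutate(parent, rng):
    # Pretend the LLM found a 10% faster, still-correct variant.
    return make_candidate(parent.cost * 0.9)

best = evolve(lambda x: x * 2, [make_candidate(1.0)], mock_mutate, [1, 2, 3])
```

The key design point the sketch preserves is that selection only ever operates on validated candidates, so performance search cannot drift away from semantic correctness.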
Ping Guo
Department of Computer Science, City University of Hong Kong
Chenyu Zhu
Department of Computer Science, City University of Hong Kong
Siyuan Chen
Department of Computer Science, City University of Hong Kong
Fei Liu
Department of Computer Science, City University of Hong Kong
Xi Lin
Department of Computer Science, City University of Hong Kong
Zhichao Lu
City University of Hong Kong
Evolutionary Computation · Bilevel Optimization · Neural Architecture Search
Qingfu Zhang
Chair Professor, FIEEE, City University of Hong Kong
evolutionary computation · multiobjective optimization · computational intelligence