🤖 AI Summary
Current large language models (LLMs) predominantly address high-level software engineering tasks and exhibit limited capability in optimizing low-level operators such as CUDA kernels. Moreover, existing CUDA kernel generation benchmarks suffer from vulnerabilities and narrow scenario coverage, hindering rigorous evaluation of real-world generalization. To address these limitations, we propose the first end-to-end intelligent agent framework for automated CUDA kernel discovery and optimization. Our approach integrates LLM-driven code generation, evolutionary meta-generation strategies, formal kernel equivalence verification, and performance-based filtering. We further introduce robust-kbench, a comprehensive, diverse, and robust evaluation benchmark. Experimental results demonstrate that our generated kernels consistently outperform native PyTorch implementations in both forward and backward passes, achieving substantial speedups. Additionally, our framework accurately detects erroneous kernels, significantly enhancing hardware-level verification reliability.
📝 Abstract
Recent advances in large language models (LLMs) demonstrate their effectiveness in scaling test-time compute for software engineering tasks. However, these approaches often focus on high-level solutions, with limited attention to optimizing low-level CUDA kernel implementations. Additionally, existing kernel generation benchmarks suffer from exploitable loopholes and insufficient diversity in testing conditions, hindering true generalization assessment. To address these limitations, we introduce robust-kbench, a new benchmark for rigorous evaluation of kernel performance and correctness across varied scenarios. Furthermore, we present a comprehensive agentic framework that automates CUDA kernel discovery, verification, and optimization. This pipeline enables frontier LLMs to translate torch code to CUDA kernels and iteratively improve their runtime within our robust evaluation setting. Our sequential workflow first translates PyTorch code into equivalent CUDA kernels. It then optimizes their runtime using a novel evolutionary meta-generation procedure tailored to the CUDA ecosystem, guided by LLM-based verifiers for correctness and efficient filtering. Evaluated on robust-kbench, our approach produces CUDA kernels outperforming torch implementations for practical applications, including forward and backward passes. It can fuse operations and deploy various runtime optimization strategies. The verifier workflow accurately classifies incorrect kernels, enhancing hardware verification efficiency.
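The workflow described above, generating candidate kernels, verifying their correctness against the reference implementation, and filtering by measured runtime, can be illustrated with a minimal Python sketch. This is a toy stand-in, not the paper's implementation: the candidate functions below play the role of LLM-generated CUDA kernels, the equivalence check stands in for the LLM-based verifier, and all names (`reference`, `verify`, `benchmark`, `evolve`) are hypothetical.

```python
import random
import time

def reference(xs):
    # Stand-in for the PyTorch reference op: sum of squares.
    return sum(x * x for x in xs)

def candidate_ok(xs):
    # A correct candidate "kernel": same reduction, explicit loop.
    total = 0.0
    for x in xs:
        total += x * x
    return total

def candidate_bad(xs):
    # An incorrect candidate: wrong reduction (plain sum).
    return sum(xs)

def verify(candidate, trials=5):
    # Verifier stand-in: check numerical equivalence on random inputs.
    for _ in range(trials):
        xs = [random.uniform(-1.0, 1.0) for _ in range(64)]
        if abs(candidate(xs) - reference(xs)) > 1e-9:
            return False
    return True

def benchmark(candidate, reps=200):
    # Runtime-based fitness: total wall time over repeated calls.
    xs = [float(i) for i in range(256)]
    start = time.perf_counter()
    for _ in range(reps):
        candidate(xs)
    return time.perf_counter() - start

def evolve(population):
    # One selection step: discard incorrect candidates, keep the fastest.
    survivors = [c for c in population if verify(c)]
    return min(survivors, key=benchmark)

best = evolve([candidate_bad, candidate_ok])
```

In the actual framework this loop runs over LLM-generated CUDA source, with correctness checked on hardware and the verifier filtering out broken kernels before the costly benchmarking step.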