AI Summary
Existing CUDA kernel auto-generation methods suffer from low efficiency, high computational overhead, and poor generalization. To address these limitations, this paper proposes CudaForge, a training-free, multi-agent framework for automated CUDA kernel generation. Its core innovation is an LLM-driven dual-agent architecture comprising a Coder agent and a Judge agent, which collaboratively perform iterative optimization guided by hardware-level feedback (e.g., Nsight Compute metrics), thereby emulating expert human development workflows. CudaForge supports heterogeneous GPU architectures (e.g., A100, H100) and interoperates with mainstream large language models. Evaluated end-to-end, it achieves a functional correctness rate of 97.6%, delivers an average 1.68× speedup over PyTorch baselines, and incurs only about $0.3 in API cost per generated kernel. Compared to state-of-the-art approaches, CudaForge demonstrates superior generality, significantly lower cost, and robust cross-architecture generalization.
Abstract
Developing efficient CUDA kernels is increasingly critical for AI applications such as large-scale LLM training. However, manual kernel design is both costly and time-consuming, motivating automatic approaches that leverage LLMs for code generation. Existing methods for automatic kernel generation, however, often produce low-efficiency kernels, incur high computational overhead, and fail to generalize across settings. In this work, we propose CudaForge, a training-free multi-agent workflow for CUDA kernel generation and optimization. Our workflow is inspired by the iterative process of human experts, which involves steps such as developing initial kernels, testing correctness, analyzing hardware feedback, and iteratively improving. More specifically, CudaForge employs two LLM agents, a Coder and a Judge, that iteratively generate, correct, and optimize CUDA kernels while integrating hardware feedback such as Nsight Compute (NCU) metrics. In extensive evaluations, we show that CudaForge, by leveraging base models like OpenAI-o3, achieves 97.6% correctness of generated kernels and an average 1.68× speedup over PyTorch baselines, substantially surpassing state-of-the-art models including OpenAI-o3 and Kevin on KernelBench. Beyond accuracy and speed, CudaForge demonstrates strong generalization across GPUs (A100, RTX 6000, 4090, 3090) and base models (OpenAI-o3, GPT-5, gpt-oss-120B, Claude-Sonnet-4, QwQ-32B) while maintaining high efficiency. In particular, generating an optimized kernel takes about 26.5 minutes on one RTX 6000 and incurs about $0.3 in API cost, significantly cheaper than existing agentic work that costs 6 H100 hours and $5 in API cost per kernel. Our results highlight that multi-agent, training-free workflows can enable cost-effective, generalizable, and high-performance CUDA kernel optimization. Code is available at https://github.com/OptimAI-Lab/CudaForge