🤖 AI Summary
Existing approaches to automatic GPU program generation are largely confined to single-kernel optimization and struggle to jointly optimize host-side configurations and overall end-to-end performance. This work proposes StitchCUDA, the first framework that integrates multi-agent collaboration with rubric- and rule-based reinforcement learning, featuring a planner, coder, and verifier to enable automated implementation of advanced CUDA techniques such as custom kernel fusion and cuBLAS epilogue operations while effectively mitigating reward hacking. Evaluated on KernelBench, StitchCUDA achieves a nearly 100% end-to-end task success rate and delivers 1.72× and 2.73× performance improvements over state-of-the-art multi-agent and reinforcement learning baselines, respectively.
📝 Abstract
Modern machine learning (ML) workloads increasingly rely on GPUs, yet achieving high end-to-end performance remains challenging because it depends on both GPU kernel efficiency and host-side settings. Although LLM-based methods show promise for automated GPU kernel generation, prior work mainly focuses on single-kernel optimization and does not extend to end-to-end programs, hindering practical deployment.
To address this challenge, we propose StitchCUDA, a multi-agent framework for end-to-end GPU program generation with three specialized agents: a Planner that orchestrates the whole-system design, a Coder dedicated to implementing it step by step, and a Verifier for correctness checking and performance profiling with Nsight Systems/Nsight Compute (nsys/ncu). To fundamentally improve the Coder's ability in end-to-end GPU programming, StitchCUDA integrates rubric-based agentic reinforcement learning over two atomic skills, task-to-code generation and feedback-driven code optimization, combining a rubric reward with a rule-based reward derived from real executions. As a result, the Coder learns to implement advanced CUDA programming techniques (e.g., custom kernel fusion, cuBLAS epilogues), and the Coder's reward hacking (e.g., simply copying PyTorch code or hardcoding outputs) is effectively prevented during benchmarking. Experiments on KernelBench show that StitchCUDA achieves a nearly 100% success rate on end-to-end GPU programming tasks, with 1.72x the speedup of the multi-agent baseline and 2.73x that of the RL model baselines.
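To make the "custom kernel fusion" technique concrete, here is a minimal language-agnostic sketch (plain Python, not code from the paper): fusion replaces several elementwise passes over the data, each of which would be a separate GPU kernel launch with intermediate memory traffic, with a single combined pass. All function names here are hypothetical illustrations.

```python
# Conceptual sketch of kernel fusion (illustrative only; not StitchCUDA code).
# On a GPU, each list comprehension below would correspond to one kernel
# launch plus a round-trip of intermediates through device memory.

def unfused(xs):
    scaled = [x * 2.0 for x in xs]         # "kernel" 1: scale
    shifted = [x + 1.0 for x in scaled]    # "kernel" 2: bias add
    return [max(x, 0.0) for x in shifted]  # "kernel" 3: ReLU

def fused(xs):
    # One fused "kernel": a single pass, no intermediate buffers.
    return [max(x * 2.0 + 1.0, 0.0) for x in xs]

data = [-2.0, -0.5, 0.0, 1.5]
assert fused(data) == unfused(data)  # identical results, fewer passes
print(fused(data))  # → [0.0, 0.0, 1.0, 4.0]
```

A cuBLAS epilogue achieves the same effect for GEMM-adjacent operations: the bias add and activation are executed inside the matmul kernel rather than as separate launches.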