🤖 AI Summary
This work addresses the suboptimal GPU (e.g., H100) inference performance of PyTorch, which has traditionally required labor-intensive, manual CUDA kernel tuning to overcome. We propose an LLM-driven multi-agent collaborative optimization framework that automates CUDA kernel generation, tuning, and validation, evaluated on the KernelBench benchmark. The framework employs specialized agents—including an operator analyzer, kernel generator, error repairer, and validator—that coordinate to jointly optimize kernels. Our analysis reveals that a “utilization-first + error-repair” strategy combination yields the best results, and that tuning granularity critically impacts the achieved speedup. Experiments demonstrate an average 2.88× inference speedup across diverse mainstream PyTorch models on H100 GPUs, outperforming conventional compilers (e.g., Triton, TVM) without requiring hand-written custom kernels. This work establishes a novel paradigm for LLM-augmented system-level optimization.
📝 Abstract
Maximizing performance on available GPU hardware is an ongoing challenge for modern AI inference systems. Traditional approaches include writing custom GPU kernels and using specialized model compilers to tune high-level code for specific GPU targets. Recent work shows that LLM-based multi-agent systems can effectively perform such tuning, often outperforming existing compilers and eliminating the need for manual kernel development. However, the dynamics of multi-agent systems for this task remain unexplored. In this work, we present a logical framework for comparing multi-agent PyTorch optimization systems. Our evaluation shows that exploit-heavy strategies perform best when paired with error-fixing agents, and that performance correlates with the granularity of optimization steps. The best implementation achieves an average 2.88× speedup on an H100 GPU across diverse tasks in KernelBench, a benchmark suite covering a range of machine learning architectures in PyTorch.
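The agent roles named in the summary (operator analyzer, kernel generator, error repairer, validator) suggest a generate–validate–repair loop. The following is a minimal sketch of such a loop; all function names, bodies, and the round limit are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the multi-agent kernel-optimization loop described above.
# Every function body here is a stand-in; the real framework would call an LLM
# for each agent role and compile/benchmark real CUDA kernels.

def analyze_operator(op_name: str) -> dict:
    """Operator analyzer: classify the operator to pick a tuning strategy (stand-in)."""
    return {"op": op_name, "memory_bound": op_name in ("add", "relu")}

def generate_kernel(analysis: dict) -> str:
    """Kernel generator: emit a candidate kernel (placeholder string, not real CUDA)."""
    strategy = "utilization-first" if analysis["memory_bound"] else "latency-first"
    return f"// {strategy} kernel for {analysis['op']}"

def validate(kernel: str) -> bool:
    """Validator: check correctness/performance (stand-in: any non-empty kernel passes)."""
    return bool(kernel)

def repair(kernel: str) -> str:
    """Error repairer: patch a failing kernel (stand-in: prepend a marker)."""
    return "// repaired\n" + kernel

def optimize(op_name: str, max_rounds: int = 3) -> str:
    """Coordinate the agents: analyze, generate, then validate/repair up to max_rounds."""
    kernel = generate_kernel(analyze_operator(op_name))
    for _ in range(max_rounds):
        if validate(kernel):
            return kernel
        kernel = repair(kernel)
    raise RuntimeError(f"kernel for {op_name} failed validation")
```

The key design point this loop illustrates is the paper's finding that exploit-heavy (here, "utilization-first") generation works best when an error-repair agent backstops it, rather than relying on generation alone.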