Astra: A Multi-Agent System for GPU Kernel Performance Optimization

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
GPU kernel performance optimization traditionally relies on labor-intensive manual tuning, and existing compilers and LLM-based approaches mainly target high-level framework-to-CUDA translation rather than directly optimizing already-deployed CUDA code. Method: We propose the first LLM-based multi-agent collaborative framework that takes real-world CUDA kernels as input and autonomously performs iterative code generation, compilation, performance profiling, loop transformation, memory-access optimization, and injection of CUDA intrinsics and fast-math functions, all without human intervention or model fine-tuning, operating under zero-shot prompting. Contribution/Results: The framework enforces functional correctness while delivering measurable speedups: evaluated on critical kernels in the SGLang inference framework, it achieves an average 1.32× speedup, substantially reducing the human effort and engineering barriers of GPU kernel optimization in LLM serving systems.

📝 Abstract
GPU kernel optimization has long been a central challenge at the intersection of high-performance computing and machine learning. Efficient kernels are crucial for accelerating large language model (LLM) training and serving, yet attaining high performance typically requires extensive manual tuning. Compiler-based systems reduce some of this burden, but still demand substantial manual design and engineering effort. Recently, researchers have explored using LLMs for GPU kernel generation, though prior work has largely focused on translating high-level PyTorch modules into CUDA code. In this work, we introduce Astra, the first LLM-based multi-agent system for GPU kernel optimization. Unlike previous approaches, Astra starts from existing CUDA implementations extracted from SGLang, a widely deployed framework for serving LLMs, rather than treating PyTorch modules as the specification. Within Astra, specialized LLM agents collaborate through iterative code generation, testing, profiling, and planning to produce kernels that are both correct and high-performance. On kernels from SGLang, Astra achieves an average speedup of 1.32x using zero-shot prompting with OpenAI o4-mini. A detailed case study further demonstrates that LLMs can autonomously apply loop transformations, optimize memory access patterns, exploit CUDA intrinsics, and leverage fast math operations to yield substantial performance gains. Our work highlights multi-agent LLM systems as a promising new paradigm for GPU kernel optimization.
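The abstract describes specialized agents collaborating through iterative code generation, testing, profiling, and planning. A minimal sketch of that loop is below; the function names and stub behaviors are hypothetical (the real system drives LLM agents plus nvcc and profiling tools, not fixed Python functions), but the control flow — generate a candidate, gate it on correctness, keep it only if profiling shows a win — mirrors the pipeline the paper outlines.

```python
def generate_candidate(source: str, plan: str) -> str:
    # Stub "generator agent": apply the planned rewrite to the kernel source.
    # Here the only known plan is a fast-math intrinsic substitution.
    if "fast-math" in plan:
        return source.replace("expf(", "__expf(")
    return source

def passes_tests(candidate: str) -> bool:
    # Stub "tester agent": the real system compiles the kernel and checks
    # numerical correctness against the reference implementation.
    return True

def profile(candidate: str) -> float:
    # Stub "profiler agent": the real system measures kernel runtime; here we
    # pretend the intrinsic version is faster so the loop has something to find.
    return 1.0 if "__expf" in candidate else 2.0

def optimize(source: str, plans: list[str]) -> str:
    # Planner loop: try each proposed optimization, keep the fastest candidate
    # that still passes the correctness gate.
    best, best_time = source, profile(source)
    for plan in plans:
        candidate = generate_candidate(source, plan)
        if passes_tests(candidate):  # correctness gate before accepting
            t = profile(candidate)
            if t < best_time:
                best, best_time = candidate, t
    return best

kernel = "y[i] = expf(x[i]);"
print(optimize(kernel, ["fast-math substitution"]))
```

The key design point this illustrates is that correctness checking and profiling sit inside the loop, so the planner only ever accepts candidates that are both valid and measurably faster.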
Problem

Research questions and friction points this paper is trying to address.

Optimizing GPU kernels for high-performance computing and machine learning
Reducing manual tuning effort in GPU kernel generation and optimization
Automating performance improvements in CUDA code through multi-agent LLM systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent LLM system optimizes GPU kernels
Iterative code generation and testing for performance
Leverages CUDA intrinsics and fast math operations
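The kinds of source-level transforms listed above can be pictured as rewrite passes over CUDA text. The sketch below is only illustrative: Astra's agents propose such edits via LLM reasoning, not fixed regexes, and the intrinsic table here covers just a few assumed examples (`__expf` and friends are real CUDA fast-math intrinsics, but trade precision for speed).

```python
import re

# A few libm-to-fast-math substitutions an agent might propose.
# CUDA's __expf/__logf/__powf intrinsics are faster but less precise.
FAST_MATH = {"expf": "__expf", "logf": "__logf", "powf": "__powf"}

def apply_fast_math(src: str) -> str:
    # Swap standard math calls for fast-math intrinsics. The \b boundary
    # keeps already-converted calls like __expf( from matching again.
    for slow, fast in FAST_MATH.items():
        src = re.sub(rf"\b{slow}\(", f"{fast}(", src)
    return src

def hint_unroll(src: str) -> str:
    # Prefix the first for-loop with an unroll pragma, one common
    # loop transformation for small fixed-trip-count kernels.
    return src.replace("for (", "#pragma unroll\nfor (", 1)

kernel = "for (int k = 0; k < K; ++k) acc += expf(a[k]) * b[k];"
print(hint_unroll(apply_fast_math(kernel)))
```

Running this prints the kernel line with `__expf` substituted and `#pragma unroll` hinted, the sort of mechanical edit that, per the case study, the LLM applies autonomously.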